00:00:00.000 Started by upstream project "autotest-per-patch" build number 130920
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.073 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.074 The recommended git tool is: git
00:00:00.074 using credential 00000000-0000-0000-0000-000000000002
00:00:00.077 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.121 Fetching changes from the remote Git repository
00:00:00.123 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.186 Using shallow fetch with depth 1
00:00:00.186 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.186 > git --version # timeout=10
00:00:00.241 > git --version # 'git version 2.39.2'
00:00:00.241 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.278 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.278 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.591 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.606 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.620 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD)
00:00:07.620 > git config core.sparsecheckout # timeout=10
00:00:07.634 > git read-tree -mu HEAD # timeout=10
00:00:07.651 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5
00:00:07.670 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images"
00:00:07.671 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10
00:00:07.754 [Pipeline] Start of Pipeline
00:00:07.765 [Pipeline] library
00:00:07.767 Loading library shm_lib@master
00:00:07.767 Library shm_lib@master is cached. Copying from home.
00:00:07.785 [Pipeline] node
00:00:07.793 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.795 [Pipeline] {
00:00:07.806 [Pipeline] catchError
00:00:07.808 [Pipeline] {
00:00:07.819 [Pipeline] wrap
00:00:07.828 [Pipeline] {
00:00:07.838 [Pipeline] stage
00:00:07.840 [Pipeline] { (Prologue)
00:00:08.077 [Pipeline] sh
00:00:08.360 + logger -p user.info -t JENKINS-CI
00:00:08.373 [Pipeline] echo
00:00:08.374 Node: WFP6
00:00:08.381 [Pipeline] sh
00:00:08.678 [Pipeline] setCustomBuildProperty
00:00:08.690 [Pipeline] echo
00:00:08.691 Cleanup processes
00:00:08.696 [Pipeline] sh
00:00:08.979 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.979 140921 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.993 [Pipeline] sh
00:00:09.278 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.278 ++ grep -v 'sudo pgrep'
00:00:09.278 ++ awk '{print $1}'
00:00:09.278 + sudo kill -9
00:00:09.279 + true
00:00:09.294 [Pipeline] cleanWs
00:00:09.306 [WS-CLEANUP] Deleting project workspace...
00:00:09.306 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.313 [WS-CLEANUP] done
00:00:09.318 [Pipeline] setCustomBuildProperty
00:00:09.335 [Pipeline] sh
00:00:09.618 + sudo git config --global --replace-all safe.directory '*'
00:00:09.715 [Pipeline] httpRequest
00:00:10.081 [Pipeline] echo
00:00:10.083 Sorcerer 10.211.164.101 is alive
00:00:10.094 [Pipeline] retry
00:00:10.097 [Pipeline] {
00:00:10.111 [Pipeline] httpRequest
00:00:10.116 HttpMethod: GET
00:00:10.116 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:10.117 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:10.144 Response Code: HTTP/1.1 200 OK
00:00:10.144 Success: Status code 200 is in the accepted range: 200,404
00:00:10.145 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:31.465 [Pipeline] }
00:00:31.481 [Pipeline] // retry
00:00:31.489 [Pipeline] sh
00:00:31.773 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:31.789 [Pipeline] httpRequest
00:00:32.153 [Pipeline] echo
00:00:32.155 Sorcerer 10.211.164.101 is alive
00:00:32.164 [Pipeline] retry
00:00:32.166 [Pipeline] {
00:00:32.180 [Pipeline] httpRequest
00:00:32.184 HttpMethod: GET
00:00:32.185 URL: http://10.211.164.101/packages/spdk_ba5b39cb298361a205f1275f98050707c51df86c.tar.gz
00:00:32.186 Sending request to url: http://10.211.164.101/packages/spdk_ba5b39cb298361a205f1275f98050707c51df86c.tar.gz
00:00:32.192 Response Code: HTTP/1.1 200 OK
00:00:32.192 Success: Status code 200 is in the accepted range: 200,404
00:00:32.193 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_ba5b39cb298361a205f1275f98050707c51df86c.tar.gz
00:01:30.741 [Pipeline] }
00:01:30.758 [Pipeline] // retry
00:01:30.766 [Pipeline] sh
00:01:31.051 + tar --no-same-owner -xf spdk_ba5b39cb298361a205f1275f98050707c51df86c.tar.gz
00:01:33.600 [Pipeline] sh
00:01:33.885 + git -C spdk log --oneline -n5
00:01:33.885 ba5b39cb2 thread: Extended options for spdk_interrupt_register
00:01:33.885 52e9db722 util: allow a fd_group to manage all its fds
00:01:33.885 6082eddb0 util: fix total fds to wait for
00:01:33.885 8ce2f3c7d util: handle events for vfio fd type
00:01:33.885 381b6895f util: Extended options for spdk_fd_group_add
00:01:33.895 [Pipeline] }
00:01:33.909 [Pipeline] // stage
00:01:33.917 [Pipeline] stage
00:01:33.920 [Pipeline] { (Prepare)
00:01:33.937 [Pipeline] writeFile
00:01:33.954 [Pipeline] sh
00:01:34.237 + logger -p user.info -t JENKINS-CI
00:01:34.250 [Pipeline] sh
00:01:34.534 + logger -p user.info -t JENKINS-CI
00:01:34.546 [Pipeline] sh
00:01:34.831 + cat autorun-spdk.conf
00:01:34.831 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.831 SPDK_TEST_NVMF=1
00:01:34.831 SPDK_TEST_NVME_CLI=1
00:01:34.831 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:34.831 SPDK_TEST_NVMF_NICS=e810
00:01:34.831 SPDK_TEST_VFIOUSER=1
00:01:34.831 SPDK_RUN_UBSAN=1
00:01:34.831 NET_TYPE=phy
00:01:34.838 RUN_NIGHTLY=0
00:01:34.843 [Pipeline] readFile
00:01:34.870 [Pipeline] withEnv
00:01:34.872 [Pipeline] {
00:01:34.884 [Pipeline] sh
00:01:35.170 + set -ex
00:01:35.170 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:35.170 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:35.170 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:35.170 ++ SPDK_TEST_NVMF=1
00:01:35.170 ++ SPDK_TEST_NVME_CLI=1
00:01:35.170 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:35.170 ++ SPDK_TEST_NVMF_NICS=e810
00:01:35.170 ++ SPDK_TEST_VFIOUSER=1
00:01:35.170 ++ SPDK_RUN_UBSAN=1
00:01:35.170 ++ NET_TYPE=phy
00:01:35.170 ++ RUN_NIGHTLY=0
00:01:35.170 + case $SPDK_TEST_NVMF_NICS in
00:01:35.170 + DRIVERS=ice
00:01:35.170 + [[ tcp == \r\d\m\a ]]
00:01:35.170 + [[ -n ice ]]
00:01:35.170 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:35.170 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:35.170 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:35.170 rmmod: ERROR: Module irdma is not currently loaded
00:01:35.170 rmmod: ERROR: Module i40iw is not currently loaded
00:01:35.170 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:35.170 + true
00:01:35.170 + for D in $DRIVERS
00:01:35.170 + sudo modprobe ice
00:01:35.170 + exit 0
00:01:35.179 [Pipeline] }
00:01:35.192 [Pipeline] // withEnv
00:01:35.197 [Pipeline] }
00:01:35.211 [Pipeline] // stage
00:01:35.221 [Pipeline] catchError
00:01:35.223 [Pipeline] {
00:01:35.238 [Pipeline] timeout
00:01:35.238 Timeout set to expire in 1 hr 0 min
00:01:35.239 [Pipeline] {
00:01:35.253 [Pipeline] stage
00:01:35.255 [Pipeline] { (Tests)
00:01:35.268 [Pipeline] sh
00:01:35.553 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:35.554 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:35.554 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:35.554 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:35.554 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:35.554 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:35.554 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:35.554 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:35.554 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:35.554 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:35.554 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:35.554 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:35.554 + source /etc/os-release
00:01:35.554 ++ NAME='Fedora Linux'
00:01:35.554 ++ VERSION='39 (Cloud Edition)'
00:01:35.554 ++ ID=fedora
00:01:35.554 ++ VERSION_ID=39
00:01:35.554 ++ VERSION_CODENAME=
00:01:35.554 ++ PLATFORM_ID=platform:f39
00:01:35.554 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:35.554 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:35.554 ++ LOGO=fedora-logo-icon
00:01:35.554 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:35.554 ++ HOME_URL=https://fedoraproject.org/
00:01:35.554 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:35.554 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:35.554 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:35.554 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:35.554 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:35.554 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:35.554 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:35.554 ++ SUPPORT_END=2024-11-12
00:01:35.554 ++ VARIANT='Cloud Edition'
00:01:35.554 ++ VARIANT_ID=cloud
00:01:35.554 + uname -a
00:01:35.554 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:35.554 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:38.156 Hugepages
00:01:38.156 node hugesize free / total
00:01:38.156 node0 1048576kB 0 / 0
00:01:38.156 node0 2048kB 0 / 0
00:01:38.156 node1 1048576kB 0 / 0
00:01:38.156 node1 2048kB 0 / 0
00:01:38.156
00:01:38.156 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:38.156 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:38.156 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:38.156 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:38.156 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:38.156 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:38.156 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:38.156 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:38.156 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:38.156 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:38.156 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:38.156 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:38.156 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:38.156 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:38.156 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:38.156 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:38.156 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:38.156 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:38.156 + rm -f /tmp/spdk-ld-path
00:01:38.156 + source autorun-spdk.conf
00:01:38.156 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:38.156 ++ SPDK_TEST_NVMF=1
00:01:38.156 ++ SPDK_TEST_NVME_CLI=1
00:01:38.156 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:38.156 ++ SPDK_TEST_NVMF_NICS=e810
00:01:38.156 ++ SPDK_TEST_VFIOUSER=1
00:01:38.156 ++ SPDK_RUN_UBSAN=1
00:01:38.156 ++ NET_TYPE=phy
00:01:38.156 ++ RUN_NIGHTLY=0
00:01:38.156 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:38.156 + [[ -n '' ]]
00:01:38.156 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:38.156 + for M in /var/spdk/build-*-manifest.txt
00:01:38.156 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:38.156 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:38.156 + for M in /var/spdk/build-*-manifest.txt
00:01:38.156 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:38.156 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:38.156 + for M in /var/spdk/build-*-manifest.txt
00:01:38.156 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:38.156 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:38.156 ++ uname
00:01:38.156 + [[ Linux == \L\i\n\u\x ]]
00:01:38.156 + sudo dmesg -T
00:01:38.156 + sudo dmesg --clear
00:01:38.156 + dmesg_pid=142358
00:01:38.156 + [[ Fedora Linux == FreeBSD ]]
00:01:38.156 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:38.156 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:38.156 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:38.156 + [[ -x /usr/src/fio-static/fio ]]
00:01:38.156 + export FIO_BIN=/usr/src/fio-static/fio
00:01:38.156 + FIO_BIN=/usr/src/fio-static/fio
00:01:38.156 + sudo dmesg -Tw
00:01:38.156 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:38.156 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:38.157 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:38.157 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:38.157 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:38.157 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:38.157 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:38.157 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:38.157 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:38.417 Test configuration:
00:01:38.417 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:38.417 SPDK_TEST_NVMF=1
00:01:38.417 SPDK_TEST_NVME_CLI=1
00:01:38.417 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:38.417 SPDK_TEST_NVMF_NICS=e810
00:01:38.417 SPDK_TEST_VFIOUSER=1
00:01:38.417 SPDK_RUN_UBSAN=1
00:01:38.417 NET_TYPE=phy
00:01:38.417 RUN_NIGHTLY=0
00:01:38.417 18:09:31 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:01:38.417 18:09:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:38.417 18:09:31 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:38.417 18:09:31 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:38.417 18:09:31 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:38.417 18:09:31 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:38.417 18:09:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:38.417 18:09:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:38.417 18:09:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:38.417 18:09:31 -- paths/export.sh@5 -- $ export PATH
00:01:38.417 18:09:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:38.417 18:09:31 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:38.417 18:09:31 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:38.417 18:09:31 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728403771.XXXXXX
00:01:38.417 18:09:31 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728403771.pBueAP
00:01:38.417 18:09:31 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:38.417 18:09:31 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:38.417 18:09:31 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:38.417 18:09:31 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:38.417 18:09:31 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:38.417 18:09:31 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:38.417 18:09:31 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:38.417 18:09:31 -- common/autotest_common.sh@10 -- $ set +x
00:01:38.417 18:09:31 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:38.417 18:09:31 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:38.417 18:09:31 -- pm/common@17 -- $ local monitor
00:01:38.417 18:09:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:38.417 18:09:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:38.417 18:09:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:38.417 18:09:31 -- pm/common@21 -- $ date +%s
00:01:38.417 18:09:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:38.417 18:09:31 -- pm/common@21 -- $ date +%s
00:01:38.417 18:09:31 -- pm/common@25 -- $ sleep 1
00:01:38.417 18:09:31 -- pm/common@21 -- $ date +%s
00:01:38.417 18:09:31 -- pm/common@21 -- $ date +%s
00:01:38.417 18:09:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403771
00:01:38.417 18:09:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403771
00:01:38.417 18:09:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403771
00:01:38.417 18:09:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403771
00:01:38.417 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403771_collect-cpu-load.pm.log
00:01:38.417 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403771_collect-vmstat.pm.log
00:01:38.417 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403771_collect-cpu-temp.pm.log
00:01:38.417 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403771_collect-bmc-pm.bmc.pm.log
00:01:39.355 18:09:32 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:39.355 18:09:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:39.355 18:09:32 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:39.355 18:09:32 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:39.355 18:09:32 -- spdk/autobuild.sh@16 -- $ date -u
00:01:39.355 Tue Oct 8 04:09:32 PM UTC 2024
00:01:39.355 18:09:32 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:39.355 v25.01-pre-51-gba5b39cb2
00:01:39.355 18:09:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:39.355 18:09:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:39.355 18:09:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:39.355 18:09:32 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:39.355 18:09:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:39.355 18:09:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:39.355 ************************************
00:01:39.355 START TEST ubsan
00:01:39.355 ************************************
00:01:39.355 18:09:32 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:39.355 using ubsan
00:01:39.355
00:01:39.355 real 0m0.000s
00:01:39.355 user 0m0.000s
00:01:39.355 sys 0m0.000s
00:01:39.355 18:09:32 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:39.355 18:09:32 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:39.355 ************************************
00:01:39.355 END TEST ubsan
00:01:39.355 ************************************
00:01:39.615 18:09:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:39.615 18:09:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:39.615 18:09:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:39.615 18:09:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:39.615 18:09:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:39.615 18:09:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:39.615 18:09:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:39.615 18:09:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:39.615 18:09:32 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:39.615 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:39.615 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:39.873 Using 'verbs' RDMA provider
00:01:53.022 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:05.235 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:05.235 Creating mk/config.mk...done.
00:02:05.235 Creating mk/cc.flags.mk...done.
00:02:05.235 Type 'make' to build.
00:02:05.235 18:09:58 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:02:05.235 18:09:58 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:05.235 18:09:58 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:05.235 18:09:58 -- common/autotest_common.sh@10 -- $ set +x
00:02:05.235 ************************************
00:02:05.235 START TEST make
00:02:05.235 ************************************
00:02:05.235 18:09:58 make -- common/autotest_common.sh@1125 -- $ make -j96
00:02:05.235 make[1]: Nothing to be done for 'all'.
00:02:06.617 The Meson build system
00:02:06.617 Version: 1.5.0
00:02:06.617 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:06.617 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:06.617 Build type: native build
00:02:06.618 Project name: libvfio-user
00:02:06.618 Project version: 0.0.1
00:02:06.618 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:06.618 C linker for the host machine: cc ld.bfd 2.40-14
00:02:06.618 Host machine cpu family: x86_64
00:02:06.618 Host machine cpu: x86_64
00:02:06.618 Run-time dependency threads found: YES
00:02:06.618 Library dl found: YES
00:02:06.618 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:06.618 Run-time dependency json-c found: YES 0.17
00:02:06.618 Run-time dependency cmocka found: YES 1.1.7
00:02:06.618 Program pytest-3 found: NO
00:02:06.618 Program flake8 found: NO
00:02:06.618 Program misspell-fixer found: NO
00:02:06.618 Program restructuredtext-lint found: NO
00:02:06.618 Program valgrind found: YES (/usr/bin/valgrind)
00:02:06.618 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:06.618 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:06.618 Compiler for C supports arguments -Wwrite-strings: YES
00:02:06.618 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:06.618 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:06.618 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:06.618 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:06.618 Build targets in project: 8
00:02:06.618 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:06.618 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:06.618
00:02:06.618 libvfio-user 0.0.1
00:02:06.618
00:02:06.618 User defined options
00:02:06.618 buildtype : debug
00:02:06.618 default_library: shared
00:02:06.618 libdir : /usr/local/lib
00:02:06.618
00:02:06.618 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:07.183 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:07.441 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:07.441 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:07.441 [3/37] Compiling C object samples/null.p/null.c.o
00:02:07.441 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:07.441 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:07.441 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:07.441 [7/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:07.441 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:07.441 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:07.441 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:07.441 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:07.441 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:07.441 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:07.441 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:07.441 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:07.441 [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:07.441 [17/37] Compiling C object samples/server.p/server.c.o
00:02:07.441 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:07.441 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:07.441 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:07.441 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:07.441 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:07.441 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:07.441 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:07.441 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:07.441 [26/37] Compiling C object samples/client.p/client.c.o
00:02:07.441 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:07.441 [28/37] Linking target samples/client
00:02:07.441 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:07.441 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:07.441 [31/37] Linking target test/unit_tests
00:02:07.700 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:07.700 [33/37] Linking target samples/null
00:02:07.700 [34/37] Linking target samples/lspci
00:02:07.700 [35/37] Linking target samples/server
00:02:07.700 [36/37] Linking target samples/gpio-pci-idio-16
00:02:07.700 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:07.700 INFO: autodetecting backend as ninja
00:02:07.700 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:07.700 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:07.958 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:07.959 ninja: no work to do.
00:02:13.228 The Meson build system
00:02:13.228 Version: 1.5.0
00:02:13.228 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:13.228 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:13.228 Build type: native build
00:02:13.228 Program cat found: YES (/usr/bin/cat)
00:02:13.228 Project name: DPDK
00:02:13.228 Project version: 24.03.0
00:02:13.228 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:13.228 C linker for the host machine: cc ld.bfd 2.40-14
00:02:13.228 Host machine cpu family: x86_64
00:02:13.228 Host machine cpu: x86_64
00:02:13.228 Message: ## Building in Developer Mode ##
00:02:13.228 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:13.228 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:13.228 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:13.228 Program python3 found: YES (/usr/bin/python3)
00:02:13.228 Program cat found: YES (/usr/bin/cat)
00:02:13.228 Compiler for C supports arguments -march=native: YES
00:02:13.228 Checking for size of "void *" : 8
00:02:13.228 Checking for size of "void *" : 8 (cached)
00:02:13.228 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:13.228 Library m found: YES
00:02:13.228 Library numa found: YES
00:02:13.228 Has header "numaif.h" : YES
00:02:13.228 Library fdt found: NO
00:02:13.228 Library execinfo found: NO
00:02:13.228 Has header "execinfo.h" : YES
00:02:13.228 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:13.228 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:13.228 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:13.228 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:13.228 Run-time dependency openssl found: YES 3.1.1
00:02:13.228 Run-time dependency libpcap found: YES 1.10.4
00:02:13.228 Has header "pcap.h" with dependency libpcap: YES
00:02:13.228 Compiler for C supports arguments -Wcast-qual: YES
00:02:13.228 Compiler for C supports arguments -Wdeprecated: YES
00:02:13.228 Compiler for C supports arguments -Wformat: YES
00:02:13.228 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:13.228 Compiler for C supports arguments -Wformat-security: NO
00:02:13.228 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:13.228 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:13.228 Compiler for C supports arguments -Wnested-externs: YES
00:02:13.228 Compiler for C supports arguments -Wold-style-definition: YES
00:02:13.228 Compiler for C supports arguments -Wpointer-arith: YES
00:02:13.228 Compiler for C supports arguments -Wsign-compare: YES
00:02:13.228 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:13.228 Compiler for C supports arguments -Wundef: YES
00:02:13.228 Compiler for C supports arguments -Wwrite-strings: YES
00:02:13.228 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:13.228 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:13.228 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:13.228 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:13.229 Program objdump found: YES (/usr/bin/objdump)
00:02:13.229 Compiler for C supports arguments -mavx512f: YES
00:02:13.229 Checking if "AVX512 checking" compiles: YES
00:02:13.229 Fetching value of define "__SSE4_2__" : 1
00:02:13.229 Fetching value of define "__AES__" : 1
00:02:13.229 Fetching value of define "__AVX__" : 1
00:02:13.229 Fetching value of define "__AVX2__" : 1
00:02:13.229 Fetching value of define "__AVX512BW__" : 1
00:02:13.229 Fetching value of define "__AVX512CD__" : 1
00:02:13.229 Fetching value of define "__AVX512DQ__" : 1
00:02:13.229 Fetching value of define "__AVX512F__" : 1
00:02:13.229 Fetching value of define "__AVX512VL__" : 1
00:02:13.229 Fetching value of define "__PCLMUL__" : 1
00:02:13.229 Fetching value of define "__RDRND__" : 1
00:02:13.229 Fetching value of define "__RDSEED__" : 1
00:02:13.229 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:13.229 Fetching value of define "__znver1__" : (undefined)
00:02:13.229 Fetching value of define "__znver2__" : (undefined)
00:02:13.229 Fetching value of define "__znver3__" : (undefined)
00:02:13.229 Fetching value of define "__znver4__" : (undefined)
00:02:13.229 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:13.229 Message: lib/log: Defining dependency "log"
00:02:13.229 Message: lib/kvargs: Defining dependency "kvargs"
00:02:13.229 Message: lib/telemetry: Defining dependency "telemetry"
00:02:13.229 Checking for function "getentropy" : NO
00:02:13.229 Message: lib/eal: Defining dependency "eal"
00:02:13.229 Message: lib/ring: Defining dependency "ring"
00:02:13.229 Message: lib/rcu: Defining dependency "rcu"
00:02:13.229 Message: lib/mempool: Defining dependency "mempool"
00:02:13.229 Message: lib/mbuf: Defining dependency "mbuf"
00:02:13.229 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:13.229 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:13.229 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:13.229 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:13.229 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:13.229 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:13.229 Compiler for C supports arguments -mpclmul: YES
00:02:13.229 Compiler for C supports arguments -maes: YES
00:02:13.229 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:13.229 Compiler for C supports arguments -mavx512bw: YES
00:02:13.229 Compiler for C supports arguments -mavx512dq: YES
00:02:13.229 Compiler for C supports arguments -mavx512vl: YES
00:02:13.229 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:13.229 Compiler for C supports arguments -mavx2: YES
00:02:13.229 Compiler for C supports arguments -mavx: YES
00:02:13.229 Message: lib/net: Defining dependency "net"
00:02:13.229 Message: lib/meter: Defining dependency "meter"
00:02:13.229 Message: lib/ethdev: Defining dependency "ethdev"
00:02:13.229 Message: lib/pci: Defining dependency "pci"
00:02:13.229 Message: lib/cmdline: Defining dependency "cmdline"
00:02:13.229 Message: lib/hash: Defining dependency "hash"
00:02:13.229 Message: lib/timer: Defining dependency "timer"
00:02:13.229 Message: lib/compressdev: Defining dependency "compressdev"
00:02:13.229 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:13.229 Message: lib/dmadev: Defining dependency "dmadev"
00:02:13.229 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:13.229 Message: lib/power: Defining dependency "power"
00:02:13.229 Message: lib/reorder: Defining dependency "reorder"
00:02:13.229 Message: lib/security: Defining dependency "security"
00:02:13.229 Has header "linux/userfaultfd.h" : YES
00:02:13.229 Has header "linux/vduse.h" : YES
00:02:13.229 Message: lib/vhost: Defining dependency "vhost"
00:02:13.229 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:13.229 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:13.229 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:13.229 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:13.229 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:13.229 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:13.229 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:13.229 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:13.229 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:13.229 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:13.229 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:13.229 Configuring doxy-api-html.conf using configuration
00:02:13.229 Configuring doxy-api-man.conf using configuration
00:02:13.229 Program mandb found: YES (/usr/bin/mandb)
00:02:13.229 Program sphinx-build found: NO
00:02:13.229 Configuring rte_build_config.h using configuration
00:02:13.229 Message:
00:02:13.229 =================
00:02:13.229 Applications Enabled
00:02:13.229 =================
00:02:13.229
00:02:13.229 apps:
00:02:13.229
00:02:13.229
00:02:13.229 Message:
00:02:13.229 =================
00:02:13.229 Libraries Enabled
00:02:13.229 =================
00:02:13.229
00:02:13.229 libs:
00:02:13.229 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:13.229 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:13.229 cryptodev, dmadev, power, reorder, security, vhost,
00:02:13.229
00:02:13.229 Message:
00:02:13.229 ===============
00:02:13.229 Drivers Enabled
00:02:13.229 ===============
00:02:13.229
00:02:13.229 common:
00:02:13.229
00:02:13.229 bus:
00:02:13.229 pci, vdev,
00:02:13.229 mempool:
00:02:13.229 ring,
00:02:13.229 dma:
00:02:13.229
00:02:13.229 net:
00:02:13.229
00:02:13.229 crypto:
00:02:13.229
00:02:13.229 compress:
00:02:13.229
00:02:13.229 vdpa:
00:02:13.229
00:02:13.229
00:02:13.229 Message:
00:02:13.229 =================
00:02:13.229 Content Skipped
00:02:13.229 =================
00:02:13.229
00:02:13.229 apps:
00:02:13.229 dumpcap: explicitly disabled via build config
00:02:13.229 graph: explicitly disabled via build config
00:02:13.229 pdump: explicitly disabled via build config
00:02:13.229 proc-info: explicitly disabled via build config
00:02:13.229 test-acl: explicitly disabled via build config
00:02:13.229 test-bbdev: explicitly disabled via build config
00:02:13.229 test-cmdline: explicitly disabled via build config
00:02:13.229 test-compress-perf: explicitly disabled via build config
00:02:13.229 test-crypto-perf: explicitly disabled via build config
00:02:13.229 test-dma-perf: explicitly disabled via build config
00:02:13.229 test-eventdev: explicitly disabled via build config
00:02:13.229 test-fib: explicitly disabled via build config
00:02:13.229 test-flow-perf: explicitly disabled via build config
00:02:13.229 test-gpudev: explicitly disabled via build config
00:02:13.229 test-mldev: explicitly disabled via build config
00:02:13.229 test-pipeline: explicitly disabled via build config
00:02:13.229 test-pmd: explicitly disabled via build config
00:02:13.229 test-regex: explicitly disabled via build config
00:02:13.229 test-sad: explicitly disabled via build config
00:02:13.229 test-security-perf: explicitly disabled via build config
00:02:13.229
00:02:13.229 libs:
00:02:13.229 argparse: explicitly disabled via build config
00:02:13.229 metrics: explicitly disabled via build config
00:02:13.229 acl: explicitly disabled via build config
00:02:13.229 bbdev: explicitly disabled via build config
00:02:13.229 bitratestats: explicitly disabled via build config
00:02:13.229 bpf: explicitly disabled via build config
00:02:13.229 cfgfile: explicitly disabled via build config
00:02:13.229 distributor: explicitly disabled via build config
00:02:13.229 efd: explicitly disabled via build config
00:02:13.229 eventdev: explicitly disabled via build config
00:02:13.229 dispatcher: explicitly disabled via build config
00:02:13.229 gpudev: explicitly disabled via build config
00:02:13.229 gro: explicitly disabled via build config
00:02:13.229 gso: explicitly disabled via build config
00:02:13.229 ip_frag: explicitly disabled via build config
00:02:13.229 jobstats: explicitly disabled via build config
00:02:13.229 latencystats: explicitly disabled via build config
00:02:13.229 lpm: explicitly disabled via build config
00:02:13.229 member: explicitly disabled via build config
00:02:13.229 pcapng: explicitly disabled via build config
00:02:13.229 rawdev: explicitly disabled via build config
00:02:13.229 regexdev: explicitly disabled via build config
00:02:13.229 mldev: explicitly disabled via build config
00:02:13.229 rib: explicitly disabled via build config
00:02:13.229 sched: explicitly disabled via build config
00:02:13.229 stack: explicitly disabled via build config
00:02:13.229 ipsec: explicitly disabled via build config
00:02:13.229 pdcp: explicitly disabled via build config
00:02:13.229 fib: explicitly disabled via build config
00:02:13.229 port: explicitly disabled via build config
00:02:13.229 pdump: explicitly disabled via build config
00:02:13.229 table: explicitly disabled via build config
00:02:13.229 pipeline: explicitly disabled via build config
00:02:13.229 graph: explicitly disabled via build config
00:02:13.229 node: explicitly disabled via build config
00:02:13.229
00:02:13.229 drivers:
00:02:13.229 common/cpt: not in enabled drivers build config
00:02:13.229 common/dpaax: not in enabled drivers build config
00:02:13.229 common/iavf: not in enabled drivers build config
00:02:13.229 common/idpf: not in enabled drivers build config
00:02:13.229 common/ionic: not in enabled drivers build config
00:02:13.229 common/mvep: not in enabled drivers build config
00:02:13.229 common/octeontx: not in enabled drivers build config
00:02:13.229 bus/auxiliary: not in enabled drivers build config
00:02:13.229 bus/cdx: not in enabled drivers build config
00:02:13.229 bus/dpaa: not in enabled drivers build config
00:02:13.229 bus/fslmc: not in enabled drivers build config
00:02:13.229 bus/ifpga: not in enabled drivers build config
00:02:13.229 bus/platform: not in enabled drivers build config
00:02:13.229 bus/uacce: not in enabled drivers build config
00:02:13.229 bus/vmbus: not in enabled drivers build config
00:02:13.229 common/cnxk: not in enabled drivers build config
00:02:13.229 common/mlx5: not in enabled drivers build config
00:02:13.229 common/nfp: not in enabled drivers build config
00:02:13.229 common/nitrox: not in enabled drivers build config
00:02:13.229 common/qat: not in enabled drivers build config
00:02:13.229 common/sfc_efx: not in enabled drivers build config
00:02:13.229 mempool/bucket: not in enabled drivers build config
00:02:13.229 mempool/cnxk: not in enabled drivers build config
00:02:13.229 mempool/dpaa: not in enabled drivers build config
00:02:13.229 mempool/dpaa2: not in enabled drivers build config
00:02:13.229 mempool/octeontx: not in enabled drivers build config
00:02:13.229 mempool/stack: not in enabled drivers build config
00:02:13.229 dma/cnxk: not in enabled drivers build config
00:02:13.229 dma/dpaa: not in enabled drivers build config
00:02:13.230 dma/dpaa2: not in enabled drivers build config
00:02:13.230 dma/hisilicon: not in enabled drivers build config
00:02:13.230 dma/idxd: not in enabled drivers build config
00:02:13.230 dma/ioat: not in enabled drivers build config
00:02:13.230 dma/skeleton: not in enabled drivers build config
00:02:13.230 net/af_packet: not in enabled drivers build config
00:02:13.230 net/af_xdp: not in enabled drivers build config
00:02:13.230 net/ark: not in enabled drivers build config
00:02:13.230 net/atlantic: not in enabled drivers build config
00:02:13.230 net/avp: not in enabled drivers build config
00:02:13.230 net/axgbe: not in enabled drivers build config
00:02:13.230 net/bnx2x: not in enabled drivers build config
00:02:13.230 net/bnxt: not in enabled drivers build config
00:02:13.230 net/bonding: not in enabled drivers build config
00:02:13.230 net/cnxk: not in enabled drivers build config
00:02:13.230 net/cpfl: not in enabled drivers build config
00:02:13.230 net/cxgbe: not in enabled drivers build config
00:02:13.230 net/dpaa: not in enabled drivers build config
00:02:13.230 net/dpaa2: not in enabled drivers build config
00:02:13.230 net/e1000: not in enabled drivers build config
00:02:13.230 net/ena: not in enabled drivers build config
00:02:13.230 net/enetc: not in enabled drivers build config
00:02:13.230 net/enetfec: not in enabled drivers build config
00:02:13.230 net/enic: not in enabled drivers build config
00:02:13.230 net/failsafe: not in enabled drivers build config
00:02:13.230 net/fm10k: not in enabled drivers build config
00:02:13.230 net/gve: not in enabled drivers build config
00:02:13.230 net/hinic: not in enabled drivers build config
00:02:13.230 net/hns3: not in enabled drivers build config
00:02:13.230 net/i40e: not in enabled drivers build config
00:02:13.230 net/iavf: not in enabled drivers build config
00:02:13.230 net/ice: not in enabled drivers build config
00:02:13.230 net/idpf: not in enabled drivers build config
00:02:13.230 net/igc: not in enabled drivers build config
00:02:13.230 net/ionic: not in enabled drivers build config
00:02:13.230 net/ipn3ke: not in enabled drivers build config
00:02:13.230 net/ixgbe: not in enabled drivers build config
00:02:13.230 net/mana: not in enabled drivers build config
00:02:13.230 net/memif: not in enabled drivers build config
00:02:13.230 net/mlx4: not in enabled drivers build config
00:02:13.230 net/mlx5: not in enabled drivers build config
00:02:13.230 net/mvneta: not in enabled drivers build config
00:02:13.230 net/mvpp2: not in enabled drivers build config
00:02:13.230 net/netvsc: not in enabled drivers build config
00:02:13.230 net/nfb: not in enabled drivers build config
00:02:13.230 net/nfp: not in enabled drivers build config
00:02:13.230 net/ngbe: not in enabled drivers build config
00:02:13.230 net/null: not in enabled drivers build config
00:02:13.230 net/octeontx: not in enabled drivers build config
00:02:13.230 net/octeon_ep: not in enabled drivers build config
00:02:13.230 net/pcap: not in enabled drivers build config
00:02:13.230 net/pfe: not in enabled drivers build config
00:02:13.230 net/qede: not in enabled drivers build config
00:02:13.230 net/ring: not in enabled drivers build config
00:02:13.230 net/sfc: not in enabled drivers build config
00:02:13.230 net/softnic: not in enabled drivers build config
00:02:13.230 net/tap: not in enabled drivers build config
00:02:13.230 net/thunderx: not in enabled drivers build config
00:02:13.230 net/txgbe: not in enabled drivers build config
00:02:13.230 net/vdev_netvsc: not in enabled drivers build config
00:02:13.230 net/vhost: not in enabled drivers build config
00:02:13.230 net/virtio: not in enabled drivers build config
00:02:13.230 net/vmxnet3: not in enabled drivers build config
00:02:13.230 raw/*: missing internal dependency, "rawdev"
00:02:13.230 crypto/armv8: not in enabled drivers build config
00:02:13.230 crypto/bcmfs: not in enabled drivers build config
00:02:13.230 crypto/caam_jr: not in enabled drivers build config
00:02:13.230 crypto/ccp: not in enabled drivers build config
00:02:13.230 crypto/cnxk: not in enabled drivers build config
00:02:13.230 crypto/dpaa_sec: not in enabled drivers build config
00:02:13.230 crypto/dpaa2_sec: not in enabled drivers build config
00:02:13.230 crypto/ipsec_mb: not in enabled drivers build config
00:02:13.230 crypto/mlx5: not in enabled drivers build config
00:02:13.230 crypto/mvsam: not in enabled drivers build config
00:02:13.230 crypto/nitrox: not in enabled drivers build config
00:02:13.230 crypto/null: not in enabled drivers build config
00:02:13.230 crypto/octeontx: not in enabled drivers build config
00:02:13.230 crypto/openssl: not in enabled drivers build config
00:02:13.230 crypto/scheduler: not in enabled drivers build config
00:02:13.230 crypto/uadk: not in enabled drivers build config
00:02:13.230 crypto/virtio: not in enabled drivers build config
00:02:13.230 compress/isal: not in enabled drivers build config
00:02:13.230 compress/mlx5: not in enabled drivers build config
00:02:13.230 compress/nitrox: not in enabled drivers build config
00:02:13.230 compress/octeontx: not in enabled drivers build config
00:02:13.230 compress/zlib: not in enabled drivers build config
00:02:13.230 regex/*: missing internal dependency, "regexdev"
00:02:13.230 ml/*: missing internal dependency, "mldev"
00:02:13.230 vdpa/ifc: not in enabled drivers build config
00:02:13.230 vdpa/mlx5: not in enabled drivers build config
00:02:13.230 vdpa/nfp: not in enabled drivers build config
00:02:13.230 vdpa/sfc: not in enabled drivers build config
00:02:13.230 event/*: missing internal dependency, "eventdev"
00:02:13.230 baseband/*: missing internal dependency, "bbdev"
00:02:13.230 gpu/*: missing internal dependency, "gpudev"
00:02:13.230
00:02:13.230
00:02:13.489 Build targets in project: 85
00:02:13.489
00:02:13.489 DPDK 24.03.0
00:02:13.489
00:02:13.489 User defined options
00:02:13.489 buildtype : debug
00:02:13.489 default_library : shared
00:02:13.489 libdir : lib
00:02:13.489 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:13.489 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:13.489 c_link_args :
00:02:13.489 cpu_instruction_set: native
00:02:13.489 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:02:13.489 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:02:13.489 enable_docs : false
00:02:13.489 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:13.489 enable_kmods : false
00:02:13.489 max_lcores : 128
00:02:13.489 tests : false
00:02:13.489
00:02:13.489 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:14.061 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:14.061 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:14.061 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:14.061 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:14.061 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:14.061 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:14.061 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:14.061 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:14.061 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:14.061 [9/268] Linking static target lib/librte_kvargs.a
00:02:14.061 [10/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:14.061 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:14.061 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:14.061 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:14.324 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:14.324 [15/268] Linking static target lib/librte_log.a
00:02:14.325 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:14.325 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:14.325 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:14.325 [19/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:14.325 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:14.325 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:14.325 [22/268] Linking static target lib/librte_pci.a
00:02:14.325 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:14.325 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:14.583 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:14.583 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:14.583 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:14.583 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:14.583 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:14.583 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:14.583 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:14.583 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:14.583 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:14.583 [34/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:14.583 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:14.583 [36/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.583 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:14.583 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:14.583 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:14.583 [40/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:14.583 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:14.583 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:14.583 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:14.583 [44/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:14.583 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:14.583 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:14.583 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:14.583 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:14.583 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:14.583 [50/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:14.583 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:14.583 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:14.583 [53/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:14.583 [54/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:14.583 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:14.583 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:14.583 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:14.583 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:14.583 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:14.583 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:14.583 [61/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:14.583 [62/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:14.843 [63/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:14.843 [64/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:14.843 [65/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:14.843 [66/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:14.843 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:14.843 [68/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:14.843 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:14.843 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:14.843 [71/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:14.843 [72/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:14.843 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:14.843 [74/268] Linking static target lib/librte_meter.a
00:02:14.843 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:14.843 [76/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:14.843 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:14.843 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:14.843 [79/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:14.843 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:14.843 [81/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:14.843 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:14.843 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:14.843 [84/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:14.843 [85/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:14.843 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:14.843 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:14.843 [88/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:14.843 [89/268] Linking static target lib/librte_ring.a
00:02:14.843 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:14.843 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:14.843 [92/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:14.843 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:14.843 [94/268] Linking static target lib/librte_telemetry.a
00:02:14.843 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:14.843 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:14.843 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:14.843 [98/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:14.843 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:14.843 [100/268] Linking static target lib/librte_rcu.a
00:02:14.843 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:14.843 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:14.843 [103/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:14.843 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:14.843 [105/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.843 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:14.843 [107/268] Linking static target lib/librte_net.a
00:02:14.843 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:14.843 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:14.843 [110/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:14.843 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:14.843 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:14.843 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:14.843 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:14.843 [115/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:14.843 [116/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:14.843 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:14.843 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:14.843 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:14.843 [120/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:14.843 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:14.843 [122/268] Linking static target lib/librte_mempool.a
00:02:14.843 [123/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:14.843 [124/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:14.843 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:14.843 [126/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:14.843 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:14.843 [128/268] Linking static target lib/librte_eal.a
00:02:14.843 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:14.843 [130/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.843 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:15.102 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:15.102 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:15.102 [134/268] Linking static target lib/librte_cmdline.a
00:02:15.102 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:15.102 [136/268] Linking target lib/librte_log.so.24.1
00:02:15.102 [137/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.102 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:15.102 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:15.102 [140/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:15.102 [141/268] Linking static target lib/librte_mbuf.a
00:02:15.102 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:15.102 [143/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.102 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:15.102 [145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.102 [146/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:15.102 [147/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:15.102 [148/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.102 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:15.102 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:15.102 [151/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:15.102
[152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:15.102 [153/268] Linking target lib/librte_kvargs.so.24.1 00:02:15.102 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:15.102 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:15.102 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.102 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:15.102 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:15.102 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:15.102 [160/268] Linking static target lib/librte_timer.a 00:02:15.102 [161/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:15.102 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:15.102 [163/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:15.102 [164/268] Linking static target lib/librte_compressdev.a 00:02:15.102 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:15.360 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:15.360 [167/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:15.360 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:15.360 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:15.360 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:15.360 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:15.360 [172/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.360 [173/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:15.360 [174/268] Linking static target lib/librte_power.a 00:02:15.360 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:15.360 [176/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:15.360 [177/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:15.360 [178/268] Linking target lib/librte_telemetry.so.24.1 00:02:15.360 [179/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:15.360 [180/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:15.360 [181/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:15.360 [182/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:15.360 [183/268] Linking static target lib/librte_dmadev.a 00:02:15.360 [184/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:15.360 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:15.360 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:15.360 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:15.360 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:15.360 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:15.360 [190/268] Linking static target lib/librte_reorder.a 00:02:15.360 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:15.360 [192/268] Linking static target lib/librte_security.a 00:02:15.360 
[193/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:15.360 [194/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:15.360 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:15.360 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:15.618 [197/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:15.618 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:15.618 [199/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:15.618 [200/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.618 [201/268] Linking static target lib/librte_hash.a 00:02:15.618 [202/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.618 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.618 [204/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.618 [205/268] Linking static target drivers/librte_mempool_ring.a 00:02:15.618 [206/268] Linking static target drivers/librte_bus_vdev.a 00:02:15.618 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:15.618 [208/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.618 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.618 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.618 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.618 [212/268] Linking static target drivers/librte_bus_pci.a 00:02:15.618 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:15.877 [214/268] Linking static target lib/librte_cryptodev.a 00:02:15.877 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.877 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.877 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.877 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.135 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.135 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.135 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:16.135 [222/268] Linking static target lib/librte_ethdev.a 00:02:16.135 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.135 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.135 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:16.394 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.394 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.330 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:17.330 [229/268] Linking static target 
lib/librte_vhost.a 00:02:17.589 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.492 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.761 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.761 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.020 [234/268] Linking target lib/librte_eal.so.24.1 00:02:25.020 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:25.020 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:25.020 [237/268] Linking target lib/librte_ring.so.24.1 00:02:25.020 [238/268] Linking target lib/librte_meter.so.24.1 00:02:25.020 [239/268] Linking target lib/librte_timer.so.24.1 00:02:25.020 [240/268] Linking target lib/librte_pci.so.24.1 00:02:25.020 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:25.278 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:25.278 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:25.278 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:25.278 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:25.278 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:25.278 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:25.278 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:25.278 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:25.278 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:25.278 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:25.537 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:25.537 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:25.537 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.537 [255/268] Linking target lib/librte_net.so.24.1 00:02:25.537 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:25.537 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:25.537 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:25.795 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.795 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.795 [261/268] Linking target lib/librte_hash.so.24.1 00:02:25.795 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:25.795 [263/268] Linking target lib/librte_security.so.24.1 00:02:25.795 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:25.795 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.054 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:26.054 [267/268] Linking target lib/librte_power.so.24.1 00:02:26.054 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:26.054 INFO: autodetecting backend as ninja 00:02:26.054 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:36.033 CC lib/log/log.o 00:02:36.033 CC lib/log/log_flags.o 00:02:36.033 CC lib/log/log_deprecated.o 00:02:36.033 CC lib/ut_mock/mock.o 
00:02:36.033 CC lib/ut/ut.o 00:02:36.033 LIB libspdk_ut_mock.a 00:02:36.033 LIB libspdk_log.a 00:02:36.033 LIB libspdk_ut.a 00:02:36.033 SO libspdk_ut_mock.so.6.0 00:02:36.292 SO libspdk_ut.so.2.0 00:02:36.292 SO libspdk_log.so.7.0 00:02:36.292 SYMLINK libspdk_ut_mock.so 00:02:36.292 SYMLINK libspdk_ut.so 00:02:36.292 SYMLINK libspdk_log.so 00:02:36.551 CC lib/util/base64.o 00:02:36.551 CC lib/dma/dma.o 00:02:36.551 CC lib/util/bit_array.o 00:02:36.551 CC lib/util/cpuset.o 00:02:36.551 CC lib/util/crc16.o 00:02:36.551 CC lib/util/crc32.o 00:02:36.551 CC lib/util/crc32c.o 00:02:36.551 CC lib/ioat/ioat.o 00:02:36.551 CC lib/util/crc32_ieee.o 00:02:36.551 CC lib/util/dif.o 00:02:36.551 CC lib/util/crc64.o 00:02:36.551 CC lib/util/fd.o 00:02:36.551 CC lib/util/fd_group.o 00:02:36.551 CXX lib/trace_parser/trace.o 00:02:36.551 CC lib/util/file.o 00:02:36.551 CC lib/util/hexlify.o 00:02:36.551 CC lib/util/iov.o 00:02:36.551 CC lib/util/math.o 00:02:36.551 CC lib/util/net.o 00:02:36.551 CC lib/util/pipe.o 00:02:36.551 CC lib/util/strerror_tls.o 00:02:36.551 CC lib/util/string.o 00:02:36.551 CC lib/util/uuid.o 00:02:36.551 CC lib/util/xor.o 00:02:36.551 CC lib/util/zipf.o 00:02:36.551 CC lib/util/md5.o 00:02:36.810 CC lib/vfio_user/host/vfio_user_pci.o 00:02:36.810 CC lib/vfio_user/host/vfio_user.o 00:02:36.810 LIB libspdk_dma.a 00:02:36.810 SO libspdk_dma.so.5.0 00:02:36.810 LIB libspdk_ioat.a 00:02:36.810 SYMLINK libspdk_dma.so 00:02:36.810 SO libspdk_ioat.so.7.0 00:02:37.068 SYMLINK libspdk_ioat.so 00:02:37.068 LIB libspdk_vfio_user.a 00:02:37.068 SO libspdk_vfio_user.so.5.0 00:02:37.068 LIB libspdk_util.a 00:02:37.068 SYMLINK libspdk_vfio_user.so 00:02:37.068 SO libspdk_util.so.10.1 00:02:37.068 SYMLINK libspdk_util.so 00:02:37.327 LIB libspdk_trace_parser.a 00:02:37.327 SO libspdk_trace_parser.so.6.0 00:02:37.327 SYMLINK libspdk_trace_parser.so 00:02:37.586 CC lib/json/json_parse.o 00:02:37.586 CC lib/json/json_util.o 00:02:37.586 CC lib/json/json_write.o 00:02:37.586 CC lib/conf/conf.o 00:02:37.586 CC lib/rdma_utils/rdma_utils.o 00:02:37.586 CC lib/vmd/vmd.o 00:02:37.586 CC lib/vmd/led.o 00:02:37.586 CC lib/idxd/idxd.o 00:02:37.586 CC lib/rdma_provider/common.o 00:02:37.586 CC lib/idxd/idxd_user.o 00:02:37.586 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:37.586 CC lib/idxd/idxd_kernel.o 00:02:37.586 CC lib/env_dpdk/env.o 00:02:37.586 CC lib/env_dpdk/memory.o 00:02:37.586 CC lib/env_dpdk/pci.o 00:02:37.586 CC lib/env_dpdk/init.o 00:02:37.586 CC lib/env_dpdk/threads.o 00:02:37.586 CC lib/env_dpdk/pci_ioat.o 00:02:37.586 CC lib/env_dpdk/pci_virtio.o 00:02:37.586 CC lib/env_dpdk/pci_vmd.o 00:02:37.586 CC lib/env_dpdk/pci_idxd.o 00:02:37.586 CC lib/env_dpdk/pci_event.o 00:02:37.586 CC lib/env_dpdk/sigbus_handler.o 00:02:37.586 CC lib/env_dpdk/pci_dpdk.o 00:02:37.586 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:37.586 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:37.586 LIB libspdk_rdma_provider.a 00:02:37.844 LIB libspdk_conf.a 00:02:37.844 SO libspdk_rdma_provider.so.6.0 00:02:37.844 LIB libspdk_json.a 00:02:37.844 SO libspdk_conf.so.6.0 00:02:37.844 LIB libspdk_rdma_utils.a 00:02:37.844 SYMLINK libspdk_rdma_provider.so 00:02:37.844 SO libspdk_json.so.6.0 00:02:37.844 SO libspdk_rdma_utils.so.1.0 00:02:37.844 SYMLINK libspdk_conf.so 00:02:37.844 SYMLINK libspdk_json.so 00:02:37.844 SYMLINK libspdk_rdma_utils.so 00:02:38.101 LIB libspdk_idxd.a 00:02:38.101 LIB libspdk_vmd.a 00:02:38.101 SO libspdk_idxd.so.12.1 00:02:38.101 SO libspdk_vmd.so.6.0 00:02:38.101 SYMLINK libspdk_idxd.so 00:02:38.101 SYMLINK 
libspdk_vmd.so 00:02:38.101 CC lib/jsonrpc/jsonrpc_server.o 00:02:38.101 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:38.101 CC lib/jsonrpc/jsonrpc_client.o 00:02:38.101 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:38.361 LIB libspdk_jsonrpc.a 00:02:38.361 SO libspdk_jsonrpc.so.6.0 00:02:38.361 SYMLINK libspdk_jsonrpc.so 00:02:38.620 LIB libspdk_env_dpdk.a 00:02:38.620 SO libspdk_env_dpdk.so.15.1 00:02:38.620 SYMLINK libspdk_env_dpdk.so 00:02:38.879 CC lib/rpc/rpc.o 00:02:38.879 LIB libspdk_rpc.a 00:02:38.879 SO libspdk_rpc.so.6.0 00:02:39.138 SYMLINK libspdk_rpc.so 00:02:39.396 CC lib/trace/trace.o 00:02:39.396 CC lib/notify/notify.o 00:02:39.396 CC lib/trace/trace_flags.o 00:02:39.396 CC lib/notify/notify_rpc.o 00:02:39.396 CC lib/trace/trace_rpc.o 00:02:39.396 CC lib/keyring/keyring.o 00:02:39.396 CC lib/keyring/keyring_rpc.o 00:02:39.396 LIB libspdk_notify.a 00:02:39.655 SO libspdk_notify.so.6.0 00:02:39.655 LIB libspdk_keyring.a 00:02:39.655 LIB libspdk_trace.a 00:02:39.655 SO libspdk_keyring.so.2.0 00:02:39.655 SYMLINK libspdk_notify.so 00:02:39.655 SO libspdk_trace.so.11.0 00:02:39.655 SYMLINK libspdk_keyring.so 00:02:39.655 SYMLINK libspdk_trace.so 00:02:39.914 CC lib/thread/thread.o 00:02:39.914 CC lib/thread/iobuf.o 00:02:39.914 CC lib/sock/sock.o 00:02:39.914 CC lib/sock/sock_rpc.o 00:02:40.480 LIB libspdk_sock.a 00:02:40.480 SO libspdk_sock.so.10.0 00:02:40.480 SYMLINK libspdk_sock.so 00:02:40.740 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:40.740 CC lib/nvme/nvme_ctrlr.o 00:02:40.740 CC lib/nvme/nvme_fabric.o 00:02:40.740 CC lib/nvme/nvme_ns_cmd.o 00:02:40.740 CC lib/nvme/nvme_ns.o 00:02:40.740 CC lib/nvme/nvme_pcie_common.o 00:02:40.740 CC lib/nvme/nvme_pcie.o 00:02:40.740 CC lib/nvme/nvme_qpair.o 00:02:40.740 CC lib/nvme/nvme.o 00:02:40.740 CC lib/nvme/nvme_quirks.o 00:02:40.740 CC lib/nvme/nvme_transport.o 00:02:40.740 CC lib/nvme/nvme_discovery.o 00:02:40.740 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:40.740 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:40.740 CC lib/nvme/nvme_tcp.o 00:02:40.740 CC lib/nvme/nvme_opal.o 00:02:40.740 CC lib/nvme/nvme_io_msg.o 00:02:40.740 CC lib/nvme/nvme_poll_group.o 00:02:40.740 CC lib/nvme/nvme_zns.o 00:02:40.740 CC lib/nvme/nvme_stubs.o 00:02:40.740 CC lib/nvme/nvme_auth.o 00:02:40.740 CC lib/nvme/nvme_cuse.o 00:02:40.740 CC lib/nvme/nvme_vfio_user.o 00:02:40.740 CC lib/nvme/nvme_rdma.o 00:02:40.998 LIB libspdk_thread.a 00:02:40.998 SO libspdk_thread.so.10.2 00:02:41.258 SYMLINK libspdk_thread.so 00:02:41.516 CC lib/vfu_tgt/tgt_endpoint.o 00:02:41.516 CC lib/vfu_tgt/tgt_rpc.o 00:02:41.516 CC lib/fsdev/fsdev.o 00:02:41.516 CC lib/fsdev/fsdev_io.o 00:02:41.516 CC lib/fsdev/fsdev_rpc.o 00:02:41.516 CC lib/init/json_config.o 00:02:41.516 CC lib/blob/blobstore.o 00:02:41.516 CC lib/blob/request.o 00:02:41.516 CC lib/init/subsystem.o 00:02:41.516 CC lib/blob/zeroes.o 00:02:41.516 CC lib/init/subsystem_rpc.o 00:02:41.516 CC lib/blob/blob_bs_dev.o 00:02:41.516 CC lib/init/rpc.o 00:02:41.516 CC lib/accel/accel.o 00:02:41.516 CC lib/accel/accel_rpc.o 00:02:41.516 CC lib/accel/accel_sw.o 00:02:41.516 CC lib/virtio/virtio.o 00:02:41.516 CC lib/virtio/virtio_vfio_user.o 00:02:41.516 CC lib/virtio/virtio_vhost_user.o 00:02:41.516 CC lib/virtio/virtio_pci.o 00:02:41.775 LIB libspdk_init.a 00:02:41.775 SO libspdk_init.so.6.0 00:02:41.775 LIB libspdk_vfu_tgt.a 00:02:41.775 LIB libspdk_virtio.a 00:02:41.775 SYMLINK libspdk_init.so 00:02:41.775 SO libspdk_vfu_tgt.so.3.0 00:02:41.775 SO libspdk_virtio.so.7.0 00:02:41.775 SYMLINK libspdk_vfu_tgt.so 00:02:41.775 SYMLINK 
libspdk_virtio.so 00:02:42.033 LIB libspdk_fsdev.a 00:02:42.033 SO libspdk_fsdev.so.1.0 00:02:42.033 SYMLINK libspdk_fsdev.so 00:02:42.033 CC lib/event/app.o 00:02:42.033 CC lib/event/reactor.o 00:02:42.033 CC lib/event/log_rpc.o 00:02:42.033 CC lib/event/app_rpc.o 00:02:42.033 CC lib/event/scheduler_static.o 00:02:42.292 LIB libspdk_accel.a 00:02:42.292 SO libspdk_accel.so.16.0 00:02:42.292 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:42.292 LIB libspdk_nvme.a 00:02:42.292 SYMLINK libspdk_accel.so 00:02:42.292 LIB libspdk_event.a 00:02:42.550 SO libspdk_event.so.15.0 00:02:42.550 SO libspdk_nvme.so.15.0 00:02:42.550 SYMLINK libspdk_event.so 00:02:42.550 SYMLINK libspdk_nvme.so 00:02:42.809 CC lib/bdev/bdev.o 00:02:42.809 CC lib/bdev/bdev_rpc.o 00:02:42.809 CC lib/bdev/scsi_nvme.o 00:02:42.809 CC lib/bdev/bdev_zone.o 00:02:42.809 CC lib/bdev/part.o 00:02:42.809 LIB libspdk_fuse_dispatcher.a 00:02:42.809 SO libspdk_fuse_dispatcher.so.1.0 00:02:43.068 SYMLINK libspdk_fuse_dispatcher.so 00:02:43.638 LIB libspdk_blob.a 00:02:43.638 SO libspdk_blob.so.11.0 00:02:43.638 SYMLINK libspdk_blob.so 00:02:44.206 CC lib/blobfs/blobfs.o 00:02:44.206 CC lib/blobfs/tree.o 00:02:44.206 CC lib/lvol/lvol.o 00:02:44.466 LIB libspdk_bdev.a 00:02:44.725 SO libspdk_bdev.so.17.0 00:02:44.725 LIB libspdk_blobfs.a 00:02:44.725 SO libspdk_blobfs.so.10.0 00:02:44.725 SYMLINK libspdk_bdev.so 00:02:44.725 LIB libspdk_lvol.a 00:02:44.725 SO libspdk_lvol.so.10.0 00:02:44.725 SYMLINK libspdk_blobfs.so 00:02:44.725 SYMLINK libspdk_lvol.so 00:02:44.984 CC lib/scsi/dev.o 00:02:44.984 CC lib/nvmf/ctrlr.o 00:02:44.984 CC lib/ublk/ublk.o 00:02:44.984 CC lib/nvmf/ctrlr_discovery.o 00:02:44.984 CC lib/nbd/nbd.o 00:02:44.984 CC lib/scsi/lun.o 00:02:44.984 CC lib/scsi/port.o 00:02:44.984 CC lib/nbd/nbd_rpc.o 00:02:44.984 CC lib/ftl/ftl_core.o 00:02:44.984 CC lib/nvmf/ctrlr_bdev.o 00:02:44.984 CC lib/ublk/ublk_rpc.o 00:02:44.984 CC lib/scsi/scsi.o 00:02:44.984 CC lib/nvmf/subsystem.o 00:02:44.984 CC lib/ftl/ftl_init.o 00:02:44.984 CC lib/nvmf/nvmf.o 00:02:44.984 CC lib/ftl/ftl_layout.o 00:02:44.984 CC lib/scsi/scsi_bdev.o 00:02:44.984 CC lib/scsi/scsi_pr.o 00:02:44.984 CC lib/nvmf/nvmf_rpc.o 00:02:44.984 CC lib/ftl/ftl_debug.o 00:02:44.984 CC lib/scsi/scsi_rpc.o 00:02:44.984 CC lib/nvmf/transport.o 00:02:44.984 CC lib/scsi/task.o 00:02:44.984 CC lib/ftl/ftl_io.o 00:02:44.984 CC lib/ftl/ftl_sb.o 00:02:44.984 CC lib/nvmf/tcp.o 00:02:44.984 CC lib/nvmf/stubs.o 00:02:44.984 CC lib/ftl/ftl_l2p.o 00:02:44.984 CC lib/nvmf/mdns_server.o 00:02:44.984 CC lib/ftl/ftl_l2p_flat.o 00:02:44.984 CC lib/nvmf/vfio_user.o 00:02:44.984 CC lib/ftl/ftl_nv_cache.o 00:02:44.984 CC lib/nvmf/rdma.o 00:02:44.984 CC lib/ftl/ftl_band.o 00:02:44.984 CC lib/nvmf/auth.o 00:02:44.984 CC lib/ftl/ftl_band_ops.o 00:02:44.984 CC lib/ftl/ftl_writer.o 00:02:44.984 CC lib/ftl/ftl_rq.o 00:02:44.984 CC lib/ftl/ftl_reloc.o 00:02:44.984 CC lib/ftl/ftl_l2p_cache.o 00:02:44.984 CC lib/ftl/ftl_p2l.o 00:02:44.984 CC lib/ftl/ftl_p2l_log.o 00:02:44.984 CC lib/ftl/mngt/ftl_mngt.o 00:02:44.984 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:44.984 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:44.984 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:44.984 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:44.984 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:44.984 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:44.984 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:44.984 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:44.984 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:44.984 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:44.984 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:02:44.984 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:44.984 CC lib/ftl/utils/ftl_conf.o 00:02:44.984 CC lib/ftl/utils/ftl_md.o 00:02:44.984 CC lib/ftl/utils/ftl_mempool.o 00:02:44.984 CC lib/ftl/utils/ftl_bitmap.o 00:02:44.984 CC lib/ftl/utils/ftl_property.o 00:02:44.984 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:44.984 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:44.984 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:44.984 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:44.984 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:44.984 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:44.984 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:44.984 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:44.984 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:44.984 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:44.984 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:44.984 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:44.984 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:44.984 CC lib/ftl/base/ftl_base_dev.o 00:02:44.984 CC lib/ftl/base/ftl_base_bdev.o 00:02:44.984 CC lib/ftl/ftl_trace.o 00:02:45.924 LIB libspdk_nbd.a 00:02:45.924 LIB libspdk_scsi.a 00:02:45.924 SO libspdk_nbd.so.7.0 00:02:45.924 SO libspdk_scsi.so.9.0 00:02:45.924 SYMLINK libspdk_nbd.so 00:02:45.924 SYMLINK libspdk_scsi.so 00:02:45.924 LIB libspdk_ublk.a 00:02:45.924 SO libspdk_ublk.so.3.0 00:02:45.924 SYMLINK libspdk_ublk.so 00:02:45.924 LIB libspdk_ftl.a 00:02:46.182 CC lib/iscsi/conn.o 00:02:46.182 CC lib/iscsi/init_grp.o 00:02:46.182 CC lib/iscsi/param.o 00:02:46.182 CC lib/iscsi/iscsi.o 00:02:46.182 CC lib/iscsi/tgt_node.o 00:02:46.182 SO libspdk_ftl.so.9.0 00:02:46.182 CC lib/iscsi/portal_grp.o 00:02:46.182 CC lib/iscsi/iscsi_rpc.o 00:02:46.182 CC lib/iscsi/iscsi_subsystem.o 00:02:46.182 CC lib/iscsi/task.o 00:02:46.182 CC lib/vhost/vhost.o 00:02:46.182 CC lib/vhost/vhost_rpc.o 00:02:46.182 CC lib/vhost/vhost_scsi.o 00:02:46.182 CC lib/vhost/vhost_blk.o 00:02:46.182 CC lib/vhost/rte_vhost_user.o 00:02:46.441 SYMLINK libspdk_ftl.so 00:02:46.700 LIB libspdk_nvmf.a 00:02:46.958 SO libspdk_nvmf.so.19.0 00:02:46.958 LIB libspdk_vhost.a 00:02:46.958 SO libspdk_vhost.so.8.0 00:02:46.958 SYMLINK libspdk_nvmf.so 00:02:47.218 SYMLINK libspdk_vhost.so 00:02:47.218 LIB libspdk_iscsi.a 00:02:47.218 SO libspdk_iscsi.so.8.0 00:02:47.218 SYMLINK libspdk_iscsi.so 00:02:47.787 CC module/env_dpdk/env_dpdk_rpc.o 00:02:47.787 CC module/vfu_device/vfu_virtio.o 00:02:47.787 CC module/vfu_device/vfu_virtio_blk.o 00:02:47.787 CC module/vfu_device/vfu_virtio_scsi.o 00:02:47.787 CC module/vfu_device/vfu_virtio_fs.o 00:02:47.787 CC module/vfu_device/vfu_virtio_rpc.o 00:02:48.045 CC module/accel/dsa/accel_dsa.o 00:02:48.045 CC module/accel/dsa/accel_dsa_rpc.o 00:02:48.045 CC module/accel/error/accel_error.o 00:02:48.045 CC module/accel/error/accel_error_rpc.o 00:02:48.045 CC module/fsdev/aio/fsdev_aio.o 00:02:48.045 CC module/accel/iaa/accel_iaa.o 00:02:48.045 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:48.045 CC module/accel/iaa/accel_iaa_rpc.o 00:02:48.045 CC module/fsdev/aio/linux_aio_mgr.o 00:02:48.045 CC module/scheduler/gscheduler/gscheduler.o 00:02:48.045 CC module/accel/ioat/accel_ioat.o 00:02:48.045 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:48.045 CC module/blob/bdev/blob_bdev.o 00:02:48.045 CC module/accel/ioat/accel_ioat_rpc.o 00:02:48.045 LIB libspdk_env_dpdk_rpc.a 00:02:48.045 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:48.045 CC module/sock/posix/posix.o 00:02:48.045 CC module/keyring/linux/keyring.o 00:02:48.045 CC module/keyring/file/keyring.o 00:02:48.045 CC 
module/keyring/linux/keyring_rpc.o 00:02:48.045 CC module/keyring/file/keyring_rpc.o 00:02:48.045 SO libspdk_env_dpdk_rpc.so.6.0 00:02:48.045 SYMLINK libspdk_env_dpdk_rpc.so 00:02:48.045 LIB libspdk_keyring_linux.a 00:02:48.045 LIB libspdk_scheduler_gscheduler.a 00:02:48.045 LIB libspdk_keyring_file.a 00:02:48.045 LIB libspdk_accel_error.a 00:02:48.045 LIB libspdk_scheduler_dpdk_governor.a 00:02:48.045 SO libspdk_keyring_linux.so.1.0 00:02:48.045 LIB libspdk_accel_ioat.a 00:02:48.045 SO libspdk_keyring_file.so.2.0 00:02:48.045 SO libspdk_scheduler_gscheduler.so.4.0 00:02:48.304 LIB libspdk_accel_iaa.a 00:02:48.304 LIB libspdk_scheduler_dynamic.a 00:02:48.304 SO libspdk_accel_error.so.2.0 00:02:48.304 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:48.304 SO libspdk_accel_ioat.so.6.0 00:02:48.304 SYMLINK libspdk_keyring_linux.so 00:02:48.304 SO libspdk_accel_iaa.so.3.0 00:02:48.304 SO libspdk_scheduler_dynamic.so.4.0 00:02:48.304 LIB libspdk_accel_dsa.a 00:02:48.304 SYMLINK libspdk_scheduler_gscheduler.so 00:02:48.304 SYMLINK libspdk_keyring_file.so 00:02:48.304 LIB libspdk_blob_bdev.a 00:02:48.304 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:48.304 SYMLINK libspdk_accel_error.so 00:02:48.304 SO libspdk_accel_dsa.so.5.0 00:02:48.304 SYMLINK libspdk_accel_ioat.so 00:02:48.304 SO libspdk_blob_bdev.so.11.0 00:02:48.304 SYMLINK libspdk_accel_iaa.so 00:02:48.304 SYMLINK libspdk_scheduler_dynamic.so 00:02:48.304 SYMLINK libspdk_accel_dsa.so 00:02:48.304 SYMLINK libspdk_blob_bdev.so 00:02:48.304 LIB libspdk_vfu_device.a 00:02:48.304 SO libspdk_vfu_device.so.3.0 00:02:48.564 SYMLINK libspdk_vfu_device.so 00:02:48.564 LIB libspdk_fsdev_aio.a 00:02:48.564 SO libspdk_fsdev_aio.so.1.0 00:02:48.564 LIB libspdk_sock_posix.a 00:02:48.564 SYMLINK libspdk_fsdev_aio.so 00:02:48.564 SO libspdk_sock_posix.so.6.0 00:02:48.823 SYMLINK libspdk_sock_posix.so 00:02:48.823 CC module/bdev/error/vbdev_error.o 00:02:48.823 CC module/bdev/error/vbdev_error_rpc.o 00:02:48.823 CC module/bdev/null/bdev_null.o 00:02:48.823 CC module/bdev/null/bdev_null_rpc.o 00:02:48.823 CC module/bdev/lvol/vbdev_lvol.o 00:02:48.823 CC module/bdev/gpt/gpt.o 00:02:48.823 CC module/bdev/malloc/bdev_malloc.o 00:02:48.823 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:48.823 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:48.823 CC module/bdev/gpt/vbdev_gpt.o 00:02:48.823 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:48.823 CC module/bdev/nvme/bdev_nvme.o 00:02:48.823 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:48.823 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:48.823 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:48.823 CC module/bdev/nvme/nvme_rpc.o 00:02:48.823 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:48.823 CC module/bdev/nvme/bdev_mdns_client.o 00:02:48.823 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:48.823 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:48.823 CC module/bdev/nvme/vbdev_opal.o 00:02:48.823 CC module/bdev/split/vbdev_split_rpc.o 00:02:48.823 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:48.823 CC module/bdev/split/vbdev_split.o 00:02:48.823 CC module/bdev/delay/vbdev_delay.o 00:02:48.823 CC module/bdev/passthru/vbdev_passthru.o 00:02:48.823 CC module/blobfs/bdev/blobfs_bdev.o 00:02:48.823 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:48.823 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:48.823 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:48.823 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:48.823 CC module/bdev/ftl/bdev_ftl.o 00:02:48.823 CC module/bdev/iscsi/bdev_iscsi.o 00:02:48.823 CC 
module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:48.823 CC module/bdev/aio/bdev_aio_rpc.o 00:02:48.823 CC module/bdev/aio/bdev_aio.o 00:02:48.823 CC module/bdev/raid/bdev_raid.o 00:02:48.823 CC module/bdev/raid/bdev_raid_sb.o 00:02:48.823 CC module/bdev/raid/bdev_raid_rpc.o 00:02:48.823 CC module/bdev/raid/raid0.o 00:02:48.823 CC module/bdev/raid/concat.o 00:02:48.823 CC module/bdev/raid/raid1.o 00:02:49.082 LIB libspdk_blobfs_bdev.a 00:02:49.082 LIB libspdk_bdev_error.a 00:02:49.082 LIB libspdk_bdev_gpt.a 00:02:49.082 LIB libspdk_bdev_split.a 00:02:49.082 LIB libspdk_bdev_null.a 00:02:49.082 SO libspdk_blobfs_bdev.so.6.0 00:02:49.082 SO libspdk_bdev_gpt.so.6.0 00:02:49.082 SO libspdk_bdev_error.so.6.0 00:02:49.082 SO libspdk_bdev_split.so.6.0 00:02:49.082 SO libspdk_bdev_null.so.6.0 00:02:49.082 LIB libspdk_bdev_passthru.a 00:02:49.082 SYMLINK libspdk_blobfs_bdev.so 00:02:49.082 LIB libspdk_bdev_ftl.a 00:02:49.082 SYMLINK libspdk_bdev_gpt.so 00:02:49.082 LIB libspdk_bdev_aio.a 00:02:49.082 LIB libspdk_bdev_malloc.a 00:02:49.082 SYMLINK libspdk_bdev_error.so 00:02:49.082 SO libspdk_bdev_passthru.so.6.0 00:02:49.082 LIB libspdk_bdev_delay.a 00:02:49.082 LIB libspdk_bdev_zone_block.a 00:02:49.082 LIB libspdk_bdev_iscsi.a 00:02:49.082 SYMLINK libspdk_bdev_split.so 00:02:49.082 SO libspdk_bdev_aio.so.6.0 00:02:49.340 SO libspdk_bdev_ftl.so.6.0 00:02:49.340 SO libspdk_bdev_malloc.so.6.0 00:02:49.340 SYMLINK libspdk_bdev_null.so 00:02:49.340 SO libspdk_bdev_delay.so.6.0 00:02:49.340 SO libspdk_bdev_zone_block.so.6.0 00:02:49.340 SO libspdk_bdev_iscsi.so.6.0 00:02:49.340 SYMLINK libspdk_bdev_passthru.so 00:02:49.340 SYMLINK libspdk_bdev_ftl.so 00:02:49.340 SYMLINK libspdk_bdev_aio.so 00:02:49.340 SYMLINK libspdk_bdev_malloc.so 00:02:49.340 SYMLINK libspdk_bdev_zone_block.so 00:02:49.340 SYMLINK libspdk_bdev_delay.so 00:02:49.340 SYMLINK libspdk_bdev_iscsi.so 00:02:49.340 LIB libspdk_bdev_virtio.a 00:02:49.340 LIB libspdk_bdev_lvol.a 00:02:49.340 SO libspdk_bdev_lvol.so.6.0 00:02:49.340 SO libspdk_bdev_virtio.so.6.0 00:02:49.340 SYMLINK libspdk_bdev_virtio.so 00:02:49.340 SYMLINK libspdk_bdev_lvol.so 00:02:49.599 LIB libspdk_bdev_raid.a 00:02:49.599 SO libspdk_bdev_raid.so.6.0 00:02:49.858 SYMLINK libspdk_bdev_raid.so 00:02:50.428 LIB libspdk_bdev_nvme.a 00:02:50.428 SO libspdk_bdev_nvme.so.7.0 00:02:50.687 SYMLINK libspdk_bdev_nvme.so 00:02:51.256 CC module/event/subsystems/vmd/vmd.o 00:02:51.256 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:51.256 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:51.256 CC module/event/subsystems/iobuf/iobuf.o 00:02:51.256 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:51.256 CC module/event/subsystems/scheduler/scheduler.o 00:02:51.256 CC module/event/subsystems/keyring/keyring.o 00:02:51.256 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:51.256 CC module/event/subsystems/sock/sock.o 00:02:51.256 CC module/event/subsystems/fsdev/fsdev.o 00:02:51.515 LIB libspdk_event_keyring.a 00:02:51.515 LIB libspdk_event_vfu_tgt.a 00:02:51.515 LIB libspdk_event_vhost_blk.a 00:02:51.515 LIB libspdk_event_vmd.a 00:02:51.515 LIB libspdk_event_fsdev.a 00:02:51.515 LIB libspdk_event_sock.a 00:02:51.515 LIB libspdk_event_iobuf.a 00:02:51.515 SO libspdk_event_vfu_tgt.so.3.0 00:02:51.515 LIB libspdk_event_scheduler.a 00:02:51.515 SO libspdk_event_keyring.so.1.0 00:02:51.515 SO libspdk_event_vhost_blk.so.3.0 00:02:51.515 SO libspdk_event_sock.so.5.0 00:02:51.515 SO libspdk_event_iobuf.so.3.0 00:02:51.515 SO libspdk_event_fsdev.so.1.0 00:02:51.515 SO libspdk_event_vmd.so.6.0 
00:02:51.515 SO libspdk_event_scheduler.so.4.0 00:02:51.515 SYMLINK libspdk_event_keyring.so 00:02:51.515 SYMLINK libspdk_event_vfu_tgt.so 00:02:51.515 SYMLINK libspdk_event_vhost_blk.so 00:02:51.515 SYMLINK libspdk_event_fsdev.so 00:02:51.515 SYMLINK libspdk_event_sock.so 00:02:51.515 SYMLINK libspdk_event_scheduler.so 00:02:51.515 SYMLINK libspdk_event_vmd.so 00:02:51.515 SYMLINK libspdk_event_iobuf.so 00:02:51.827 CC module/event/subsystems/accel/accel.o 00:02:52.086 LIB libspdk_event_accel.a 00:02:52.086 SO libspdk_event_accel.so.6.0 00:02:52.086 SYMLINK libspdk_event_accel.so 00:02:52.345 CC module/event/subsystems/bdev/bdev.o 00:02:52.605 LIB libspdk_event_bdev.a 00:02:52.605 SO libspdk_event_bdev.so.6.0 00:02:52.605 SYMLINK libspdk_event_bdev.so 00:02:52.863 CC module/event/subsystems/scsi/scsi.o 00:02:52.863 CC module/event/subsystems/nbd/nbd.o 00:02:52.863 CC module/event/subsystems/ublk/ublk.o 00:02:52.863 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:52.863 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:53.123 LIB libspdk_event_nbd.a 00:02:53.123 LIB libspdk_event_ublk.a 00:02:53.123 LIB libspdk_event_scsi.a 00:02:53.123 SO libspdk_event_nbd.so.6.0 00:02:53.123 SO libspdk_event_ublk.so.3.0 00:02:53.123 SO libspdk_event_scsi.so.6.0 00:02:53.123 LIB libspdk_event_nvmf.a 00:02:53.123 SYMLINK libspdk_event_nbd.so 00:02:53.123 SYMLINK libspdk_event_ublk.so 00:02:53.123 SYMLINK libspdk_event_scsi.so 00:02:53.123 SO libspdk_event_nvmf.so.6.0 00:02:53.123 SYMLINK libspdk_event_nvmf.so 00:02:53.382 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:53.382 CC module/event/subsystems/iscsi/iscsi.o 00:02:53.699 LIB libspdk_event_iscsi.a 00:02:53.699 LIB libspdk_event_vhost_scsi.a 00:02:53.699 SO libspdk_event_iscsi.so.6.0 00:02:53.699 SO libspdk_event_vhost_scsi.so.3.0 00:02:53.699 SYMLINK libspdk_event_iscsi.so 00:02:53.699 SYMLINK libspdk_event_vhost_scsi.so 00:02:54.017 SO libspdk.so.6.0 00:02:54.017 SYMLINK libspdk.so 00:02:54.323 CC app/trace_record/trace_record.o 00:02:54.323 CC app/spdk_nvme_discover/discovery_aer.o 00:02:54.323 TEST_HEADER include/spdk/accel.h 00:02:54.323 TEST_HEADER include/spdk/accel_module.h 00:02:54.323 TEST_HEADER include/spdk/assert.h 00:02:54.323 CC app/spdk_nvme_identify/identify.o 00:02:54.323 TEST_HEADER include/spdk/base64.h 00:02:54.323 CC test/rpc_client/rpc_client_test.o 00:02:54.323 TEST_HEADER include/spdk/barrier.h 00:02:54.323 TEST_HEADER include/spdk/bdev.h 00:02:54.323 CC app/spdk_nvme_perf/perf.o 00:02:54.323 CXX app/trace/trace.o 00:02:54.323 TEST_HEADER include/spdk/bdev_module.h 00:02:54.323 CC app/spdk_lspci/spdk_lspci.o 00:02:54.323 TEST_HEADER include/spdk/bdev_zone.h 00:02:54.323 TEST_HEADER include/spdk/bit_array.h 00:02:54.323 TEST_HEADER include/spdk/bit_pool.h 00:02:54.324 TEST_HEADER include/spdk/blob_bdev.h 00:02:54.324 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:54.324 TEST_HEADER include/spdk/conf.h 00:02:54.324 TEST_HEADER include/spdk/blob.h 00:02:54.324 TEST_HEADER include/spdk/blobfs.h 00:02:54.324 CC app/spdk_top/spdk_top.o 00:02:54.324 TEST_HEADER include/spdk/config.h 00:02:54.324 TEST_HEADER include/spdk/cpuset.h 00:02:54.324 TEST_HEADER include/spdk/crc16.h 00:02:54.324 TEST_HEADER include/spdk/crc32.h 00:02:54.324 TEST_HEADER include/spdk/crc64.h 00:02:54.324 TEST_HEADER include/spdk/dif.h 00:02:54.324 TEST_HEADER include/spdk/dma.h 00:02:54.324 TEST_HEADER include/spdk/env.h 00:02:54.324 TEST_HEADER include/spdk/env_dpdk.h 00:02:54.324 TEST_HEADER include/spdk/endian.h 00:02:54.324 TEST_HEADER 
include/spdk/event.h 00:02:54.324 TEST_HEADER include/spdk/fd_group.h 00:02:54.324 TEST_HEADER include/spdk/file.h 00:02:54.324 TEST_HEADER include/spdk/fd.h 00:02:54.324 TEST_HEADER include/spdk/fsdev.h 00:02:54.324 TEST_HEADER include/spdk/fsdev_module.h 00:02:54.324 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:54.324 TEST_HEADER include/spdk/gpt_spec.h 00:02:54.324 TEST_HEADER include/spdk/ftl.h 00:02:54.324 TEST_HEADER include/spdk/hexlify.h 00:02:54.324 TEST_HEADER include/spdk/histogram_data.h 00:02:54.324 TEST_HEADER include/spdk/idxd.h 00:02:54.324 TEST_HEADER include/spdk/idxd_spec.h 00:02:54.324 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:54.324 TEST_HEADER include/spdk/init.h 00:02:54.324 TEST_HEADER include/spdk/ioat.h 00:02:54.324 TEST_HEADER include/spdk/iscsi_spec.h 00:02:54.324 TEST_HEADER include/spdk/ioat_spec.h 00:02:54.324 TEST_HEADER include/spdk/jsonrpc.h 00:02:54.324 TEST_HEADER include/spdk/json.h 00:02:54.324 TEST_HEADER include/spdk/keyring.h 00:02:54.324 TEST_HEADER include/spdk/keyring_module.h 00:02:54.324 TEST_HEADER include/spdk/log.h 00:02:54.324 TEST_HEADER include/spdk/lvol.h 00:02:54.324 TEST_HEADER include/spdk/md5.h 00:02:54.324 TEST_HEADER include/spdk/likely.h 00:02:54.324 TEST_HEADER include/spdk/memory.h 00:02:54.324 CC app/spdk_dd/spdk_dd.o 00:02:54.324 TEST_HEADER include/spdk/nbd.h 00:02:54.324 TEST_HEADER include/spdk/mmio.h 00:02:54.324 CC app/iscsi_tgt/iscsi_tgt.o 00:02:54.324 TEST_HEADER include/spdk/notify.h 00:02:54.324 CC app/nvmf_tgt/nvmf_main.o 00:02:54.324 TEST_HEADER include/spdk/nvme.h 00:02:54.324 TEST_HEADER include/spdk/net.h 00:02:54.324 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:54.324 TEST_HEADER include/spdk/nvme_intel.h 00:02:54.324 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:54.324 TEST_HEADER include/spdk/nvme_zns.h 00:02:54.324 TEST_HEADER include/spdk/nvme_spec.h 00:02:54.324 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:54.324 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:54.324 TEST_HEADER include/spdk/nvmf_spec.h 00:02:54.324 TEST_HEADER include/spdk/nvmf.h 00:02:54.324 TEST_HEADER include/spdk/nvmf_transport.h 00:02:54.324 TEST_HEADER include/spdk/opal_spec.h 00:02:54.324 TEST_HEADER include/spdk/pci_ids.h 00:02:54.324 TEST_HEADER include/spdk/opal.h 00:02:54.324 TEST_HEADER include/spdk/pipe.h 00:02:54.324 TEST_HEADER include/spdk/queue.h 00:02:54.324 TEST_HEADER include/spdk/reduce.h 00:02:54.324 TEST_HEADER include/spdk/scheduler.h 00:02:54.324 TEST_HEADER include/spdk/rpc.h 00:02:54.324 TEST_HEADER include/spdk/scsi.h 00:02:54.324 TEST_HEADER include/spdk/scsi_spec.h 00:02:54.324 TEST_HEADER include/spdk/string.h 00:02:54.324 TEST_HEADER include/spdk/stdinc.h 00:02:54.324 TEST_HEADER include/spdk/trace.h 00:02:54.324 TEST_HEADER include/spdk/sock.h 00:02:54.324 TEST_HEADER include/spdk/thread.h 00:02:54.324 TEST_HEADER include/spdk/trace_parser.h 00:02:54.324 TEST_HEADER include/spdk/tree.h 00:02:54.324 TEST_HEADER include/spdk/ublk.h 00:02:54.324 TEST_HEADER include/spdk/util.h 00:02:54.324 TEST_HEADER include/spdk/uuid.h 00:02:54.324 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:54.324 TEST_HEADER include/spdk/version.h 00:02:54.324 TEST_HEADER include/spdk/vhost.h 00:02:54.324 TEST_HEADER include/spdk/vmd.h 00:02:54.324 TEST_HEADER include/spdk/xor.h 00:02:54.324 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:54.324 TEST_HEADER include/spdk/zipf.h 00:02:54.324 CXX test/cpp_headers/accel_module.o 00:02:54.324 CXX test/cpp_headers/accel.o 00:02:54.324 CXX test/cpp_headers/barrier.o 00:02:54.324 CXX 
test/cpp_headers/assert.o 00:02:54.324 CXX test/cpp_headers/bdev.o 00:02:54.324 CXX test/cpp_headers/bdev_module.o 00:02:54.324 CXX test/cpp_headers/base64.o 00:02:54.324 CXX test/cpp_headers/bdev_zone.o 00:02:54.324 CXX test/cpp_headers/bit_pool.o 00:02:54.324 CXX test/cpp_headers/blob_bdev.o 00:02:54.324 CXX test/cpp_headers/bit_array.o 00:02:54.324 CXX test/cpp_headers/blobfs.o 00:02:54.324 CXX test/cpp_headers/blobfs_bdev.o 00:02:54.324 CXX test/cpp_headers/blob.o 00:02:54.324 CXX test/cpp_headers/config.o 00:02:54.324 CXX test/cpp_headers/conf.o 00:02:54.324 CXX test/cpp_headers/crc16.o 00:02:54.324 CXX test/cpp_headers/crc64.o 00:02:54.324 CXX test/cpp_headers/cpuset.o 00:02:54.324 CXX test/cpp_headers/crc32.o 00:02:54.324 CXX test/cpp_headers/dif.o 00:02:54.324 CXX test/cpp_headers/endian.o 00:02:54.324 CC app/spdk_tgt/spdk_tgt.o 00:02:54.324 CXX test/cpp_headers/dma.o 00:02:54.324 CXX test/cpp_headers/env.o 00:02:54.324 CXX test/cpp_headers/fd_group.o 00:02:54.324 CXX test/cpp_headers/env_dpdk.o 00:02:54.324 CXX test/cpp_headers/event.o 00:02:54.324 CXX test/cpp_headers/fd.o 00:02:54.324 CXX test/cpp_headers/file.o 00:02:54.324 CXX test/cpp_headers/fsdev_module.o 00:02:54.324 CXX test/cpp_headers/fsdev.o 00:02:54.324 CXX test/cpp_headers/ftl.o 00:02:54.324 CXX test/cpp_headers/hexlify.o 00:02:54.324 CXX test/cpp_headers/fuse_dispatcher.o 00:02:54.324 CXX test/cpp_headers/histogram_data.o 00:02:54.324 CXX test/cpp_headers/gpt_spec.o 00:02:54.324 CXX test/cpp_headers/idxd.o 00:02:54.324 CXX test/cpp_headers/idxd_spec.o 00:02:54.324 CXX test/cpp_headers/init.o 00:02:54.324 CXX test/cpp_headers/iscsi_spec.o 00:02:54.324 CXX test/cpp_headers/ioat.o 00:02:54.324 CXX test/cpp_headers/ioat_spec.o 00:02:54.324 CXX test/cpp_headers/jsonrpc.o 00:02:54.324 CXX test/cpp_headers/json.o 00:02:54.324 CXX test/cpp_headers/keyring.o 00:02:54.324 CXX test/cpp_headers/likely.o 00:02:54.324 CXX test/cpp_headers/keyring_module.o 00:02:54.324 CXX test/cpp_headers/log.o 00:02:54.324 CXX test/cpp_headers/lvol.o 00:02:54.324 CXX test/cpp_headers/memory.o 00:02:54.324 CXX test/cpp_headers/md5.o 00:02:54.324 CXX test/cpp_headers/mmio.o 00:02:54.324 CXX test/cpp_headers/notify.o 00:02:54.324 CXX test/cpp_headers/net.o 00:02:54.324 CXX test/cpp_headers/nbd.o 00:02:54.324 CXX test/cpp_headers/nvme.o 00:02:54.324 CXX test/cpp_headers/nvme_intel.o 00:02:54.324 CXX test/cpp_headers/nvme_ocssd.o 00:02:54.324 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:54.324 CXX test/cpp_headers/nvme_spec.o 00:02:54.324 CXX test/cpp_headers/nvme_zns.o 00:02:54.324 CXX test/cpp_headers/nvmf_cmd.o 00:02:54.324 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:54.324 CXX test/cpp_headers/nvmf.o 00:02:54.324 CXX test/cpp_headers/nvmf_spec.o 00:02:54.324 CXX test/cpp_headers/nvmf_transport.o 00:02:54.324 CC examples/util/zipf/zipf.o 00:02:54.324 CXX test/cpp_headers/opal.o 00:02:54.324 CC examples/ioat/perf/perf.o 00:02:54.608 CC examples/ioat/verify/verify.o 00:02:54.608 CC test/env/pci/pci_ut.o 00:02:54.608 CC test/env/memory/memory_ut.o 00:02:54.608 CXX test/cpp_headers/opal_spec.o 00:02:54.608 CC test/env/vtophys/vtophys.o 00:02:54.608 CC test/app/jsoncat/jsoncat.o 00:02:54.608 CC test/dma/test_dma/test_dma.o 00:02:54.608 CC test/app/histogram_perf/histogram_perf.o 00:02:54.608 CC test/app/stub/stub.o 00:02:54.608 CC test/thread/poller_perf/poller_perf.o 00:02:54.608 CC app/fio/nvme/fio_plugin.o 00:02:54.608 CC app/fio/bdev/fio_plugin.o 00:02:54.608 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:54.608 CC 
test/app/bdev_svc/bdev_svc.o 00:02:54.608 LINK spdk_lspci 00:02:54.874 LINK interrupt_tgt 00:02:54.874 LINK nvmf_tgt 00:02:54.874 CC test/env/mem_callbacks/mem_callbacks.o 00:02:54.874 LINK spdk_nvme_discover 00:02:54.874 LINK zipf 00:02:54.874 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:54.874 LINK rpc_client_test 00:02:54.874 CXX test/cpp_headers/pci_ids.o 00:02:54.874 LINK vtophys 00:02:54.874 CXX test/cpp_headers/pipe.o 00:02:54.874 CXX test/cpp_headers/queue.o 00:02:54.874 CXX test/cpp_headers/reduce.o 00:02:55.138 LINK histogram_perf 00:02:55.138 CXX test/cpp_headers/rpc.o 00:02:55.138 CXX test/cpp_headers/scheduler.o 00:02:55.138 CXX test/cpp_headers/scsi.o 00:02:55.138 CXX test/cpp_headers/scsi_spec.o 00:02:55.138 CXX test/cpp_headers/sock.o 00:02:55.138 CXX test/cpp_headers/stdinc.o 00:02:55.138 CXX test/cpp_headers/string.o 00:02:55.138 CXX test/cpp_headers/thread.o 00:02:55.138 LINK spdk_trace_record 00:02:55.138 CXX test/cpp_headers/trace.o 00:02:55.138 CXX test/cpp_headers/trace_parser.o 00:02:55.138 CXX test/cpp_headers/tree.o 00:02:55.138 CXX test/cpp_headers/ublk.o 00:02:55.138 CXX test/cpp_headers/util.o 00:02:55.138 CXX test/cpp_headers/uuid.o 00:02:55.138 CXX test/cpp_headers/version.o 00:02:55.138 CXX test/cpp_headers/vfio_user_spec.o 00:02:55.138 CXX test/cpp_headers/vmd.o 00:02:55.138 CXX test/cpp_headers/vhost.o 00:02:55.138 CXX test/cpp_headers/xor.o 00:02:55.138 CXX test/cpp_headers/vfio_user_pci.o 00:02:55.138 LINK iscsi_tgt 00:02:55.138 CXX test/cpp_headers/zipf.o 00:02:55.138 LINK ioat_perf 00:02:55.138 LINK jsoncat 00:02:55.138 LINK bdev_svc 00:02:55.138 LINK poller_perf 00:02:55.138 LINK spdk_tgt 00:02:55.138 LINK spdk_dd 00:02:55.138 LINK env_dpdk_post_init 00:02:55.138 LINK stub 00:02:55.138 LINK verify 00:02:55.138 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:55.138 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:55.138 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:55.138 LINK spdk_trace 00:02:55.398 LINK pci_ut 00:02:55.398 LINK test_dma 00:02:55.398 CC examples/idxd/perf/perf.o 00:02:55.398 CC examples/vmd/led/led.o 00:02:55.398 CC examples/vmd/lsvmd/lsvmd.o 00:02:55.398 CC examples/sock/hello_world/hello_sock.o 00:02:55.398 LINK spdk_nvme_perf 00:02:55.398 CC examples/thread/thread/thread_ex.o 00:02:55.398 LINK spdk_nvme 00:02:55.657 CC test/event/reactor/reactor.o 00:02:55.657 LINK nvme_fuzz 00:02:55.657 CC test/event/reactor_perf/reactor_perf.o 00:02:55.657 CC test/event/event_perf/event_perf.o 00:02:55.657 LINK spdk_bdev 00:02:55.657 CC test/event/app_repeat/app_repeat.o 00:02:55.657 CC app/vhost/vhost.o 00:02:55.657 CC test/event/scheduler/scheduler.o 00:02:55.657 LINK lsvmd 00:02:55.657 LINK spdk_nvme_identify 00:02:55.657 LINK led 00:02:55.657 LINK reactor 00:02:55.657 LINK spdk_top 00:02:55.657 LINK mem_callbacks 00:02:55.657 LINK hello_sock 00:02:55.657 LINK vhost_fuzz 00:02:55.657 LINK event_perf 00:02:55.657 LINK reactor_perf 00:02:55.657 LINK app_repeat 00:02:55.657 LINK idxd_perf 00:02:55.657 LINK thread 00:02:55.915 LINK vhost 00:02:55.915 LINK scheduler 00:02:55.915 CC test/nvme/sgl/sgl.o 00:02:55.915 CC test/nvme/boot_partition/boot_partition.o 00:02:55.915 CC test/nvme/reserve/reserve.o 00:02:55.915 CC test/nvme/connect_stress/connect_stress.o 00:02:55.915 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:55.915 CC test/nvme/err_injection/err_injection.o 00:02:55.915 CC test/nvme/cuse/cuse.o 00:02:55.915 CC test/nvme/startup/startup.o 00:02:55.915 CC test/nvme/compliance/nvme_compliance.o 00:02:55.915 CC test/nvme/e2edp/nvme_dp.o 
00:02:55.915 CC test/nvme/fused_ordering/fused_ordering.o
00:02:55.915 CC test/nvme/simple_copy/simple_copy.o
00:02:55.915 CC test/nvme/aer/aer.o
00:02:55.915 CC test/nvme/reset/reset.o
00:02:55.915 CC test/nvme/overhead/overhead.o
00:02:55.915 CC test/nvme/fdp/fdp.o
00:02:55.915 CC test/accel/dif/dif.o
00:02:55.915 CC test/blobfs/mkfs/mkfs.o
00:02:55.915 CC test/lvol/esnap/esnap.o
00:02:55.915 LINK boot_partition
00:02:56.174 LINK connect_stress
00:02:56.174 LINK reserve
00:02:56.174 LINK memory_ut
00:02:56.174 LINK doorbell_aers
00:02:56.174 LINK err_injection
00:02:56.174 LINK startup
00:02:56.174 LINK fused_ordering
00:02:56.174 LINK simple_copy
00:02:56.174 LINK sgl
00:02:56.174 LINK reset
00:02:56.174 CC examples/nvme/nvme_manage/nvme_manage.o
00:02:56.174 CC examples/nvme/arbitration/arbitration.o
00:02:56.174 CC examples/nvme/reconnect/reconnect.o
00:02:56.174 CC examples/nvme/cmb_copy/cmb_copy.o
00:02:56.174 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:56.174 CC examples/nvme/abort/abort.o
00:02:56.174 CC examples/nvme/hello_world/hello_world.o
00:02:56.174 CC examples/nvme/hotplug/hotplug.o
00:02:56.174 LINK aer
00:02:56.174 LINK overhead
00:02:56.174 LINK nvme_dp
00:02:56.174 LINK mkfs
00:02:56.174 LINK nvme_compliance
00:02:56.174 LINK fdp
00:02:56.174 CC examples/accel/perf/accel_perf.o
00:02:56.433 CC examples/blob/cli/blobcli.o
00:02:56.433 CC examples/fsdev/hello_world/hello_fsdev.o
00:02:56.433 LINK pmr_persistence
00:02:56.433 LINK cmb_copy
00:02:56.433 CC examples/blob/hello_world/hello_blob.o
00:02:56.433 LINK hello_world
00:02:56.433 LINK hotplug
00:02:56.433 LINK arbitration
00:02:56.433 LINK reconnect
00:02:56.433 LINK abort
00:02:56.433 LINK dif
00:02:56.433 LINK nvme_manage
00:02:56.698 LINK hello_blob
00:02:56.698 LINK hello_fsdev
00:02:56.698 LINK iscsi_fuzz
00:02:56.698 LINK accel_perf
00:02:56.698 LINK blobcli
00:02:56.963 LINK cuse
00:02:56.963 CC test/bdev/bdevio/bdevio.o
00:02:57.222 CC examples/bdev/hello_world/hello_bdev.o
00:02:57.222 CC examples/bdev/bdevperf/bdevperf.o
00:02:57.222 LINK bdevio
00:02:57.481 LINK hello_bdev
00:02:57.740 LINK bdevperf
00:02:58.308 CC examples/nvmf/nvmf/nvmf.o
00:02:58.567 LINK nvmf
00:02:59.505 LINK esnap
00:02:59.765
00:02:59.765 real 0m54.842s
00:02:59.765 user 8m16.436s
00:02:59.765 sys 3m43.084s
00:02:59.765 18:10:52 make -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:59.765 18:10:52 make -- common/autotest_common.sh@10 -- $ set +x
00:02:59.765 ************************************
00:02:59.765 END TEST make
00:02:59.765 ************************************
00:02:59.765 18:10:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:59.765 18:10:52 -- pm/common@29 -- $ signal_monitor_resources TERM
00:02:59.765 18:10:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:02:59.765 18:10:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:59.765 18:10:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:59.765 18:10:52 -- pm/common@44 -- $ pid=142389
00:02:59.765 18:10:52 -- pm/common@50 -- $ kill -TERM 142389
00:02:59.765 18:10:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:59.765 18:10:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:59.765 18:10:53 -- pm/common@44 -- $ pid=142391
00:02:59.765 18:10:53 -- pm/common@50 -- $ kill -TERM 142391
00:02:59.765 18:10:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:59.765 18:10:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:59.765 18:10:53 -- pm/common@44 -- $ pid=142392
00:02:59.765 18:10:53 -- pm/common@50 -- $ kill -TERM 142392
00:02:59.765 18:10:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:59.765 18:10:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:59.765 18:10:53 -- pm/common@44 -- $ pid=142418
00:02:59.765 18:10:53 -- pm/common@50 -- $ sudo -E kill -TERM 142418
00:03:00.025 18:10:53 -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:03:00.025 18:10:53 -- common/autotest_common.sh@1681 -- # lcov --version
00:03:00.025 18:10:53 -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:03:00.025 18:10:53 -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:03:00.025 18:10:53 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:00.025 18:10:53 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:00.025 18:10:53 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:00.025 18:10:53 -- scripts/common.sh@336 -- # IFS=.-:
00:03:00.025 18:10:53 -- scripts/common.sh@336 -- # read -ra ver1
00:03:00.025 18:10:53 -- scripts/common.sh@337 -- # IFS=.-:
00:03:00.025 18:10:53 -- scripts/common.sh@337 -- # read -ra ver2
00:03:00.025 18:10:53 -- scripts/common.sh@338 -- # local 'op=<'
00:03:00.025 18:10:53 -- scripts/common.sh@340 -- # ver1_l=2
00:03:00.025 18:10:53 -- scripts/common.sh@341 -- # ver2_l=1
00:03:00.025 18:10:53 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:00.025 18:10:53 -- scripts/common.sh@344 -- # case "$op" in
00:03:00.025 18:10:53 -- scripts/common.sh@345 -- # : 1
00:03:00.025 18:10:53 -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:00.025 18:10:53 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:00.025 18:10:53 -- scripts/common.sh@365 -- # decimal 1
00:03:00.025 18:10:53 -- scripts/common.sh@353 -- # local d=1
00:03:00.025 18:10:53 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:00.025 18:10:53 -- scripts/common.sh@355 -- # echo 1
00:03:00.025 18:10:53 -- scripts/common.sh@365 -- # ver1[v]=1
00:03:00.025 18:10:53 -- scripts/common.sh@366 -- # decimal 2
00:03:00.025 18:10:53 -- scripts/common.sh@353 -- # local d=2
00:03:00.025 18:10:53 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:00.025 18:10:53 -- scripts/common.sh@355 -- # echo 2
00:03:00.025 18:10:53 -- scripts/common.sh@366 -- # ver2[v]=2
00:03:00.025 18:10:53 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:00.025 18:10:53 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:00.025 18:10:53 -- scripts/common.sh@368 -- # return 0
00:03:00.025 18:10:53 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:00.025 18:10:53 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:03:00.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:00.025 --rc genhtml_branch_coverage=1
00:03:00.025 --rc genhtml_function_coverage=1
00:03:00.025 --rc genhtml_legend=1
00:03:00.026 --rc geninfo_all_blocks=1
00:03:00.026 --rc geninfo_unexecuted_blocks=1
00:03:00.026
00:03:00.026 '
00:03:00.026 18:10:53 -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:03:00.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:00.026 --rc genhtml_branch_coverage=1
00:03:00.026 --rc genhtml_function_coverage=1
00:03:00.026 --rc genhtml_legend=1
00:03:00.026 --rc geninfo_all_blocks=1
00:03:00.026 --rc geninfo_unexecuted_blocks=1
00:03:00.026
00:03:00.026 '
00:03:00.026 18:10:53 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:03:00.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:00.026 --rc genhtml_branch_coverage=1
00:03:00.026 --rc genhtml_function_coverage=1
00:03:00.026 --rc genhtml_legend=1
00:03:00.026 --rc geninfo_all_blocks=1
00:03:00.026 --rc geninfo_unexecuted_blocks=1
00:03:00.026
00:03:00.026 '
00:03:00.026 18:10:53 -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:03:00.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:00.026 --rc genhtml_branch_coverage=1
00:03:00.026 --rc genhtml_function_coverage=1
00:03:00.026 --rc genhtml_legend=1
00:03:00.026 --rc geninfo_all_blocks=1
00:03:00.026 --rc geninfo_unexecuted_blocks=1
00:03:00.026
00:03:00.026 '
00:03:00.026 18:10:53 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:03:00.026 18:10:53 -- nvmf/common.sh@7 -- # uname -s
00:03:00.026 18:10:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:00.026 18:10:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:00.026 18:10:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:00.026 18:10:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:00.026 18:10:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:00.026 18:10:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:00.026 18:10:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:00.026 18:10:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:00.026 18:10:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:00.026 18:10:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:00.026 18:10:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:03:00.026 18:10:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:03:00.026 18:10:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:00.026 18:10:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:00.026 18:10:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:03:00.026 18:10:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:00.026 18:10:53 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:00.026 18:10:53 -- scripts/common.sh@15 -- # shopt -s extglob
00:03:00.026 18:10:53 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:00.026 18:10:53 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:00.026 18:10:53 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:00.026 18:10:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:00.026 18:10:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:00.026 18:10:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:00.026 18:10:53 -- paths/export.sh@5 -- # export PATH
00:03:00.026 18:10:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:00.026 18:10:53 -- nvmf/common.sh@51 -- # : 0
00:03:00.026 18:10:53 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:03:00.026 18:10:53 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:03:00.026 18:10:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:03:00.026 18:10:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:00.026 18:10:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:00.026 18:10:53 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:03:00.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:03:00.026 18:10:53 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:03:00.026 18:10:53 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:03:00.026 18:10:53 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:03:00.026 18:10:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:03:00.026 18:10:53 -- spdk/autotest.sh@32 -- # uname -s
00:03:00.026 18:10:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:03:00.026 18:10:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:03:00.026 18:10:53 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:03:00.026 18:10:53 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:03:00.026 18:10:53 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:03:00.026 18:10:53 -- spdk/autotest.sh@44 -- # modprobe nbd
00:03:00.026 18:10:53 -- spdk/autotest.sh@46 -- # type -P udevadm
00:03:00.026 18:10:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:03:00.026 18:10:53 -- spdk/autotest.sh@48 -- # udevadm_pid=204634
00:03:00.026 18:10:53 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:03:00.026 18:10:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:03:00.026 18:10:53 -- pm/common@17 -- # local monitor
00:03:00.026 18:10:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:00.026 18:10:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:00.026 18:10:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:00.026 18:10:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:00.026 18:10:53 -- pm/common@21 -- # date +%s
00:03:00.026 18:10:53 -- pm/common@21 -- # date +%s
00:03:00.026 18:10:53 -- pm/common@25 -- # sleep 1
00:03:00.026 18:10:53 -- pm/common@21 -- # date +%s
00:03:00.026 18:10:53 -- pm/common@21 -- # date +%s
00:03:00.026 18:10:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728403853
00:03:00.026 18:10:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728403853
00:03:00.026 18:10:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728403853
00:03:00.026 18:10:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728403853
00:03:00.026 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728403853_collect-cpu-load.pm.log
00:03:00.026 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728403853_collect-vmstat.pm.log
00:03:00.026 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728403853_collect-cpu-temp.pm.log
00:03:00.026 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728403853_collect-bmc-pm.bmc.pm.log
00:03:00.964 18:10:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:03:00.964 18:10:54 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:03:00.964 18:10:54 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:00.964 18:10:54 -- common/autotest_common.sh@10 -- # set +x
00:03:00.964 18:10:54 -- spdk/autotest.sh@59 -- # create_test_list
00:03:00.964 18:10:54 -- common/autotest_common.sh@748 -- # xtrace_disable
00:03:00.964 18:10:54 -- common/autotest_common.sh@10 -- # set +x
00:03:01.223 18:10:54 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:03:01.223 18:10:54 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:01.223 18:10:54 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:01.223 18:10:54 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:01.223 18:10:54 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:01.223 18:10:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:03:01.223 18:10:54 -- common/autotest_common.sh@1455 -- # uname
00:03:01.223 18:10:54 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:03:01.223 18:10:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:03:01.223 18:10:54 -- common/autotest_common.sh@1475 -- # uname
00:03:01.223 18:10:54 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:03:01.223 18:10:54 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:03:01.223 18:10:54 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:03:01.223 lcov: LCOV version 1.15
00:03:01.223 18:10:54 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:03:13.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:13.436 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:25.645 18:11:18 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:25.645 18:11:18 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:25.645 18:11:18 -- common/autotest_common.sh@10 -- # set +x
00:03:25.645 18:11:18 -- spdk/autotest.sh@78 -- # rm -f
00:03:25.645 18:11:18 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:28.181 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:03:28.441 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:28.441 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:28.441 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:28.441 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:28.441 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:28.441 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:28.441 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:28.441 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:28.441 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:28.441 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:28.441 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:28.441 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:28.700 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:28.700 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:28.700 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:28.700 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:28.700 18:11:21 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:28.700 18:11:21 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:03:28.700 18:11:21 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:03:28.700 18:11:21 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:03:28.700 18:11:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:03:28.700 18:11:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:03:28.700 18:11:21 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:03:28.700 18:11:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:28.700 18:11:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:03:28.700 18:11:21 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:28.700 18:11:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:28.700 18:11:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:28.700 18:11:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:28.700 18:11:21 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:28.700 18:11:21 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:28.700 No valid GPT data, bailing
00:03:28.700 18:11:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:28.700 18:11:21 -- scripts/common.sh@394 -- # pt=
00:03:28.700 18:11:21 -- scripts/common.sh@395 -- # return 1
00:03:28.700 18:11:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:28.700 1+0 records in
00:03:28.700 1+0 records out
00:03:28.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00183865 s, 570 MB/s
00:03:28.700 18:11:21 -- spdk/autotest.sh@105 -- # sync
00:03:28.700 18:11:21 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:28.700 18:11:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:28.700 18:11:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:35.270 18:11:27 -- spdk/autotest.sh@111 -- # uname -s
00:03:35.270 18:11:27 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:35.270 18:11:27 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:35.270 18:11:27 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:37.176 Hugepages
00:03:37.176 node hugesize free / total
00:03:37.176 node0 1048576kB 0 / 0
00:03:37.176 node0 2048kB 0 / 0
00:03:37.176 node1 1048576kB 0 / 0
00:03:37.176 node1 2048kB 0 / 0
00:03:37.176
00:03:37.176 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:37.176 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:37.176 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:37.176 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:37.176 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:37.176 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:37.176 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:37.176 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:37.176 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:37.176 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:03:37.176 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:37.176 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:37.176 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:37.176 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:37.176 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:37.176 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:37.176 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:37.176 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:37.435 18:11:30 -- spdk/autotest.sh@117 -- # uname -s
00:03:37.435 18:11:30 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:37.435 18:11:30 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:37.435 18:11:30 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:40.726 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:40.726 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:40.726 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:40.726 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:40.726 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:40.726 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:40.726 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:40.726 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:40.726 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:40.726 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:40.726 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:40.726 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:40.726 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:40.726 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:40.726 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:40.726 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:41.663 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:41.922 18:11:35 -- common/autotest_common.sh@1515 -- # sleep 1
00:03:42.859 18:11:36 -- common/autotest_common.sh@1516 -- # bdfs=()
00:03:42.860 18:11:36 -- common/autotest_common.sh@1516 -- # local bdfs
00:03:42.860 18:11:36 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:03:42.860 18:11:36 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:03:42.860 18:11:36 -- common/autotest_common.sh@1496 -- # bdfs=()
00:03:42.860 18:11:36 -- common/autotest_common.sh@1496 -- # local bdfs
00:03:42.860 18:11:36 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:42.860 18:11:36 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:42.860 18:11:36 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:03:42.860 18:11:36 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:03:42.860 18:11:36 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0
00:03:42.860 18:11:36 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:46.152 Waiting for block devices as requested
00:03:46.152 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:03:46.152 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:03:46.152 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:03:46.152 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:03:46.152 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:03:46.152 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:03:46.152 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:03:46.411 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:03:46.411 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:03:46.411 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:03:46.411 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:03:46.670 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:03:46.670 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:03:46.670 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:03:46.928 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:03:46.928 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:03:46.928 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:03:47.188 18:11:40 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:03:47.188 18:11:40 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0
00:03:47.188 18:11:40 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0
00:03:47.188 18:11:40 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme
00:03:47.188 18:11:40 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:03:47.188 18:11:40 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]]
00:03:47.188 18:11:40 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:03:47.188 18:11:40 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0
00:03:47.188 18:11:40 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0
00:03:47.188 18:11:40 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]]
00:03:47.188 18:11:40 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0
00:03:47.188 18:11:40 -- common/autotest_common.sh@1529 -- # grep oacs
00:03:47.188 18:11:40 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:03:47.188 18:11:40 -- common/autotest_common.sh@1529 -- # oacs=' 0xe'
00:03:47.188 18:11:40 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:03:47.188 18:11:40 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:03:47.188 18:11:40 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0
00:03:47.188 18:11:40 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:03:47.188 18:11:40 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:03:47.188 18:11:40 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:03:47.188 18:11:40 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:03:47.188 18:11:40 -- common/autotest_common.sh@1541 -- # continue
00:03:47.188 18:11:40 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:03:47.188 18:11:40 -- common/autotest_common.sh@730 -- # xtrace_disable
00:03:47.188 18:11:40 -- common/autotest_common.sh@10 -- # set +x
00:03:47.188 18:11:40 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:03:47.188 18:11:40 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:47.188 18:11:40 -- common/autotest_common.sh@10 -- # set +x
00:03:47.188 18:11:40 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:50.480 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:50.480 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:50.480 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:50.480 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:50.480 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:50.480 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:50.480 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:50.480 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:50.480 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:50.480 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:50.480 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:50.480 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:50.480 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:50.480 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:50.480 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:50.480 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:51.417 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:51.676 18:11:44 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:03:51.676 18:11:44 -- common/autotest_common.sh@730 -- # xtrace_disable
00:03:51.676 18:11:44 -- common/autotest_common.sh@10 -- # set +x
00:03:51.676 18:11:44 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:03:51.676 18:11:44 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs
00:03:51.676 18:11:44 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54
00:03:51.676 18:11:44 -- common/autotest_common.sh@1561 -- # bdfs=()
00:03:51.676 18:11:44 -- common/autotest_common.sh@1561 -- # _bdfs=()
00:03:51.676 18:11:44 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs
00:03:51.676 18:11:44 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs))
00:03:51.676 18:11:44 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs
00:03:51.676 18:11:44 -- common/autotest_common.sh@1496 -- # bdfs=()
00:03:51.676 18:11:44 -- common/autotest_common.sh@1496 -- # local bdfs
00:03:51.676 18:11:44 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:51.676 18:11:44 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:51.676 18:11:44 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:03:51.676 18:11:44 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:03:51.676 18:11:44 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0
00:03:51.676 18:11:44 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:03:51.676 18:11:44 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device
00:03:51.676 18:11:44 -- common/autotest_common.sh@1564 -- # device=0x0a54
00:03:51.677 18:11:44 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:03:51.677 18:11:44 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf)
00:03:51.677 18:11:44 -- common/autotest_common.sh@1570 -- # (( 1 > 0 ))
00:03:51.677 18:11:44 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0
00:03:51.677 18:11:44 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]]
00:03:51.677 18:11:44 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=218846
00:03:51.677 18:11:44 -- common/autotest_common.sh@1583 -- # waitforlisten 218846
00:03:51.677 18:11:44 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:03:51.677 18:11:44 -- common/autotest_common.sh@831 -- # '[' -z 218846 ']'
00:03:51.677 18:11:44 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:51.677 18:11:44 -- common/autotest_common.sh@836 -- # local max_retries=100
00:03:51.677 18:11:44 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:51.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:51.677 18:11:44 -- common/autotest_common.sh@840 -- # xtrace_disable
00:03:51.677 18:11:44 -- common/autotest_common.sh@10 -- # set +x
00:03:51.677 [2024-10-08 18:11:44.957898] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:03:51.677 [2024-10-08 18:11:44.957945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid218846 ]
00:03:51.936 [2024-10-08 18:11:45.025516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:52.503 [2024-10-08 18:11:45.104649] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:03:52.503 18:11:45 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:03:52.503 18:11:45 -- common/autotest_common.sh@864 -- # return 0
00:03:52.503 18:11:45 -- common/autotest_common.sh@1585 -- # bdf_id=0
00:03:52.503 18:11:45 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}"
00:03:52.503 18:11:45 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:03:55.792 nvme0n1
00:03:55.792 18:11:48 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:03:55.792 [2024-10-08 18:11:48.966164] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:03:55.792 request:
00:03:55.792 {
00:03:55.792 "nvme_ctrlr_name": "nvme0",
00:03:55.792 "password": "test",
00:03:55.792 "method": "bdev_nvme_opal_revert",
00:03:55.792 "req_id": 1
00:03:55.792 }
00:03:55.792 Got JSON-RPC error response
00:03:55.792 response:
00:03:55.792 {
00:03:55.792 "code": -32602,
00:03:55.792 "message": "Invalid parameters"
00:03:55.792 }
00:03:55.792 18:11:48 -- common/autotest_common.sh@1589 -- # true
00:03:55.792 18:11:48 -- common/autotest_common.sh@1590 -- # (( ++bdf_id ))
00:03:55.792 18:11:48 -- common/autotest_common.sh@1593 -- # killprocess 218846
00:03:55.792 18:11:48 -- common/autotest_common.sh@950 -- # '[' -z 218846 ']'
00:03:55.792 18:11:48 -- common/autotest_common.sh@954 -- # kill -0 218846
00:03:55.792 18:11:48 -- common/autotest_common.sh@955 -- # uname
00:03:55.792 18:11:48 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:03:55.792 18:11:48 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 218846
00:03:55.792 18:11:49 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:03:55.792 18:11:49 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:03:55.792 18:11:49 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 218846'
00:03:55.792 killing process with pid 218846
00:03:55.792 18:11:49 -- common/autotest_common.sh@969 -- # kill 218846
00:03:55.792 18:11:49 -- common/autotest_common.sh@974 -- # wait 218846
00:03:58.333 18:11:51 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:03:58.333 18:11:51 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:03:58.333 18:11:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:58.333 18:11:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:58.333 18:11:51 -- spdk/autotest.sh@149 -- # timing_enter lib
00:03:58.333 18:11:51 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:58.333 18:11:51 -- common/autotest_common.sh@10 -- # set +x
00:03:58.333 18:11:51 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:03:58.333 18:11:51 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:58.333 18:11:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:58.333 18:11:51 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:58.333 18:11:51 -- common/autotest_common.sh@10 -- # set +x
00:03:58.333 ************************************
00:03:58.333 START TEST env
00:03:58.333 ************************************
00:03:58.333 18:11:51 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:58.333 * Looking for test storage...
00:03:58.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:03:58.333 18:11:51 env -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:03:58.333 18:11:51 env -- common/autotest_common.sh@1681 -- # lcov --version
00:03:58.333 18:11:51 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:03:58.333 18:11:51 env -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:03:58.333 18:11:51 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:58.333 18:11:51 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:58.333 18:11:51 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:58.333 18:11:51 env -- scripts/common.sh@336 -- # IFS=.-:
00:03:58.333 18:11:51 env -- scripts/common.sh@336 -- # read -ra ver1
00:03:58.333 18:11:51 env -- scripts/common.sh@337 -- # IFS=.-:
00:03:58.333 18:11:51 env -- scripts/common.sh@337 -- # read -ra ver2
00:03:58.333 18:11:51 env -- scripts/common.sh@338 -- # local 'op=<'
00:03:58.333 18:11:51 env -- scripts/common.sh@340 -- # ver1_l=2
00:03:58.333 18:11:51 env -- scripts/common.sh@341 -- # ver2_l=1
00:03:58.333 18:11:51 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:58.333 18:11:51 env -- scripts/common.sh@344 -- # case "$op" in
00:03:58.333 18:11:51 env -- scripts/common.sh@345 -- # : 1
00:03:58.333 18:11:51 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:58.333 18:11:51 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:58.333 18:11:51 env -- scripts/common.sh@365 -- # decimal 1
00:03:58.333 18:11:51 env -- scripts/common.sh@353 -- # local d=1
00:03:58.333 18:11:51 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:58.333 18:11:51 env -- scripts/common.sh@355 -- # echo 1
00:03:58.333 18:11:51 env -- scripts/common.sh@365 -- # ver1[v]=1
00:03:58.333 18:11:51 env -- scripts/common.sh@366 -- # decimal 2
00:03:58.333 18:11:51 env -- scripts/common.sh@353 -- # local d=2
00:03:58.333 18:11:51 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:58.333 18:11:51 env -- scripts/common.sh@355 -- # echo 2
00:03:58.333 18:11:51 env -- scripts/common.sh@366 -- # ver2[v]=2
00:03:58.333 18:11:51 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:58.333 18:11:51 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:58.333 18:11:51 env -- scripts/common.sh@368 -- # return 0
00:03:58.333 18:11:51 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:58.333 18:11:51 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:03:58.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:58.333 --rc genhtml_branch_coverage=1
00:03:58.333 --rc genhtml_function_coverage=1
00:03:58.333 --rc genhtml_legend=1
00:03:58.333 --rc geninfo_all_blocks=1
00:03:58.333 --rc geninfo_unexecuted_blocks=1
00:03:58.333
00:03:58.333 '
00:03:58.333 18:11:51 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:03:58.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:58.333 --rc genhtml_branch_coverage=1
00:03:58.333 --rc genhtml_function_coverage=1
00:03:58.333 --rc genhtml_legend=1
00:03:58.333 --rc geninfo_all_blocks=1
00:03:58.333 --rc geninfo_unexecuted_blocks=1
00:03:58.333
00:03:58.333 '
00:03:58.333 18:11:51 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:03:58.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:58.333 --rc genhtml_branch_coverage=1
00:03:58.333 --rc genhtml_function_coverage=1
00:03:58.333 --rc genhtml_legend=1
00:03:58.333 --rc geninfo_all_blocks=1
00:03:58.333 --rc geninfo_unexecuted_blocks=1
00:03:58.333
00:03:58.333 '
00:03:58.333 18:11:51 env -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:03:58.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:58.333 --rc genhtml_branch_coverage=1
00:03:58.333 --rc genhtml_function_coverage=1
00:03:58.333 --rc genhtml_legend=1
00:03:58.333 --rc geninfo_all_blocks=1
00:03:58.333 --rc geninfo_unexecuted_blocks=1
00:03:58.333
00:03:58.333 '
00:03:58.333 18:11:51 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:58.333 18:11:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:58.333 18:11:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:58.333 18:11:51 env -- common/autotest_common.sh@10 -- # set +x
00:03:58.334 ************************************
00:03:58.334 START TEST env_memory
00:03:58.334 ************************************
00:03:58.334 18:11:51 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:58.334
00:03:58.334
00:03:58.334 CUnit - A unit testing framework for C - Version 2.1-3
00:03:58.334 http://cunit.sourceforge.net/
00:03:58.334
00:03:58.334
00:03:58.334 Suite: memory
00:03:58.334 Test: alloc and free memory map ...[2024-10-08 18:11:51.434734] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:03:58.334 passed
00:03:58.334 Test: mem map translation ...[2024-10-08 18:11:51.452773] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:03:58.334 [2024-10-08 18:11:51.452786] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:03:58.334 [2024-10-08 18:11:51.452819] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:03:58.334 [2024-10-08 18:11:51.452825] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:03:58.334 passed
00:03:58.334 Test: mem map registration ...[2024-10-08 18:11:51.488589] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:03:58.334 [2024-10-08 18:11:51.488603] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:03:58.334 passed
00:03:58.334 Test: mem map adjacent registrations ...passed
00:03:58.334
00:03:58.334 Run Summary: Type Total Ran Passed Failed Inactive
00:03:58.334 suites 1 1 n/a 0 0
00:03:58.334 tests 4 4 4 0 0
00:03:58.334 asserts 152 152 152 0 n/a
00:03:58.334
00:03:58.334 Elapsed time = 0.133 seconds
00:03:58.334
00:03:58.334 real 0m0.146s
00:03:58.334 user 0m0.137s
00:03:58.334 sys 0m0.009s
00:03:58.334 18:11:51 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:58.334 18:11:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:03:58.334 ************************************
00:03:58.334 END TEST env_memory
00:03:58.334 ************************************
00:03:58.334 18:11:51 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:58.334 18:11:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:58.334 18:11:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:58.334 18:11:51 env -- common/autotest_common.sh@10 -- # set +x
00:03:58.334 ************************************
00:03:58.334 START TEST env_vtophys
00:03:58.334 ************************************
00:03:58.334 18:11:51 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:58.334 EAL: lib.eal log level changed from notice to debug
00:03:58.334 EAL: Detected lcore 0 as core 0 on socket 0
00:03:58.334 EAL: Detected lcore 1 as core 1 on socket 0
00:03:58.334 EAL: Detected lcore 2 as core 2 on socket 0
00:03:58.334 EAL: Detected lcore 3 as core 3 on socket 0
00:03:58.334 EAL: Detected lcore 4 as core 4 on socket 0
00:03:58.334 EAL: Detected lcore 5 as core 5 on socket 0
00:03:58.334 EAL: Detected lcore 6 as core 6 on socket 0
00:03:58.334 EAL: Detected lcore 7 as core 8 on socket 0
00:03:58.334 EAL: Detected lcore 8 as core 9 on socket 0
00:03:58.334 EAL: Detected lcore 9 as core 10 on socket 0
00:03:58.334 EAL: Detected lcore 10 as core 11 on socket 0
00:03:58.334 EAL: Detected lcore 11 as core 12 on socket 0
00:03:58.334 EAL: Detected lcore 12 as core 13 on socket 0
00:03:58.334 EAL: Detected lcore 13 as core 16 on socket 0
00:03:58.334 EAL: Detected lcore 14 as core 17 on socket 0
00:03:58.334 EAL: Detected lcore 15 as core 18 on socket 0
00:03:58.334 EAL: Detected lcore 16 as core 19 on socket 0
00:03:58.334 EAL: Detected lcore 17 as core 20 on socket 0
00:03:58.334 EAL: Detected lcore 18 as core 21 on socket 0
00:03:58.334 EAL: Detected lcore 19 as core 25 on socket 0
00:03:58.334 EAL: Detected lcore 20 as core 26 on socket 0
00:03:58.334 EAL: Detected lcore 21 as core 27 on socket 0
00:03:58.334 EAL: Detected lcore 22 as core 28 on socket 0
00:03:58.334 EAL: Detected lcore 23 as core 29 on socket 0
00:03:58.334 EAL: Detected lcore 24 as core 0 on socket 1
00:03:58.334 EAL: Detected lcore 25 as core 1 on socket 1
00:03:58.334 EAL: Detected lcore 26 as core 2 on socket 1
00:03:58.334 EAL: Detected lcore 27 as core 3 on socket 1
00:03:58.334 EAL: Detected lcore 28 as core 4 on socket 1
00:03:58.334 EAL: Detected lcore 29 as core 5 on socket 1
00:03:58.334 EAL: Detected lcore 30 as core 6 on socket 1
00:03:58.334 EAL: Detected lcore 31 as core 8 on socket 1
00:03:58.334 EAL: Detected lcore 32 as core 10 on socket 1
00:03:58.334 EAL: Detected lcore 33 as core 11 on socket 1
00:03:58.334 EAL: Detected lcore 34 as core 12 on socket 1
00:03:58.334 EAL: Detected lcore 35 as core 13 on socket 1
00:03:58.334 EAL: Detected lcore 36 as core 16 on socket 1
00:03:58.334 EAL: Detected lcore 37 as core 17 on socket 1
00:03:58.334 EAL: Detected lcore 38 as core 18 on socket 1
00:03:58.334 EAL: Detected lcore 39 as core 19 on socket 1
00:03:58.334 EAL: Detected lcore 40 as core 20 on socket 1
00:03:58.334 EAL: Detected lcore 41 as core 21 on socket 1
00:03:58.334 EAL: Detected lcore 42 as core 24 on socket 1
00:03:58.334 EAL: Detected lcore 43 as core 25 on socket 1
00:03:58.334 EAL: Detected lcore 44 as core 26 on socket 1
00:03:58.334 EAL: Detected lcore 45 as core 27 on socket 1
00:03:58.334 EAL: Detected lcore 46 as core 28 on socket 1
00:03:58.334 EAL: Detected lcore 47 as core 29 on socket 1
00:03:58.334 EAL: Detected lcore 48 as core 0 on socket 0
00:03:58.334 EAL: Detected lcore 49 as core 1 on socket 0
00:03:58.334 EAL: Detected lcore 50 as core 2 on socket 0
00:03:58.334 EAL: Detected lcore 51 as core 3 on socket 0
00:03:58.334 EAL: Detected lcore 52 as core 4 on socket 0
00:03:58.334 EAL: Detected lcore 53 as core 5 on socket 0
00:03:58.334 EAL: Detected lcore 54 as core 6 on socket 0
00:03:58.334 EAL: Detected lcore 55 as core 8 on socket 0
00:03:58.334 EAL: Detected lcore 56 as core 9 on socket 0
00:03:58.334 EAL: Detected lcore 57 as core 10 on socket 0
00:03:58.334 EAL: Detected lcore 58 as core 11 on socket 0
00:03:58.334 EAL: Detected lcore 59 as core 12 on socket 0
00:03:58.334 EAL: Detected lcore 60 as core 13 on socket 0
00:03:58.334 EAL: Detected lcore 61 as core 16 on socket 0
00:03:58.334 EAL: Detected lcore 62 as core 17 on socket 0
00:03:58.334 EAL: Detected lcore 63 as core 18 on socket 0
00:03:58.334 EAL: Detected lcore 64 as core 19 on socket 0
00:03:58.334 EAL: Detected lcore 65 as core 20 on socket 0
00:03:58.334 EAL: Detected lcore 66 as core 21 on socket 0
00:03:58.334 EAL: Detected lcore 67 as core 25 on socket 0
00:03:58.334 EAL: Detected lcore 68 as core 26 on socket 0
00:03:58.334 EAL: Detected lcore 69 as core 27 on socket 0
00:03:58.334 EAL: Detected lcore 70 as core 28 on socket 0
00:03:58.334 EAL: Detected lcore 71 as core 29 on socket 0
00:03:58.334 EAL: Detected lcore 72 as core 0 on socket 1
00:03:58.334 EAL: Detected lcore 73 as core 1 on socket 1
00:03:58.334 EAL: Detected lcore 74 as core 2 on socket 1
00:03:58.334 EAL: Detected lcore 75 as core 3 on socket 1
00:03:58.334 EAL: Detected lcore 76 as core 4 on socket 1
00:03:58.334 EAL: Detected lcore 77 as core 5 on socket 1
00:03:58.334 EAL: Detected lcore 78 as core 6 on socket 1
00:03:58.334 EAL: Detected lcore 79 as core 8 on socket 1
00:03:58.334 EAL: Detected lcore 80 as core 10 on socket 1
00:03:58.334 EAL: Detected lcore 81 as core 11 on socket 1
00:03:58.334 EAL: Detected lcore 82 as core 12 on socket 1
00:03:58.334 EAL: Detected lcore 83 as core 13 on socket 1
00:03:58.334 EAL: Detected lcore 84 as core 16 on socket 1
00:03:58.334 EAL: Detected lcore 85 as core 17 on socket 1
00:03:58.334 EAL: Detected lcore 86 as core 18 on socket 1
00:03:58.334 EAL: Detected lcore 87 as core 19 on socket 1
00:03:58.334 EAL: Detected lcore 88 as core 20 on socket 1
00:03:58.334 EAL: Detected lcore 89 as core 21 on socket 1
00:03:58.334 EAL: Detected lcore 90 as core 24 on socket 1
00:03:58.334 EAL: Detected lcore 91 as core 25 on socket 1
00:03:58.334 EAL: Detected lcore 92 as core 26 on socket 1
00:03:58.334 EAL: Detected lcore 93 as core 27 on socket 1
00:03:58.334 EAL: Detected lcore 94 as core 28 on socket 1
00:03:58.334 EAL: Detected lcore 95 as core 29 on socket 1
00:03:58.334 EAL: Maximum logical cores by configuration: 128
00:03:58.334 EAL: Detected CPU lcores: 96
00:03:58.334 EAL: Detected NUMA nodes: 2
00:03:58.334 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:03:58.334 EAL: Detected shared linkage of DPDK
00:03:58.334 EAL: No shared files mode enabled, IPC will be disabled
00:03:58.693 EAL: Bus pci wants IOVA as 'DC'
00:03:58.693 EAL: Buses did not request a specific IOVA mode.
00:03:58.693 EAL: IOMMU is available, selecting IOVA as VA mode.
00:03:58.693 EAL: Selected IOVA mode 'VA'
00:03:58.693 EAL: Probing VFIO support...
00:03:58.693 EAL: IOMMU type 1 (Type 1) is supported
00:03:58.693 EAL: IOMMU type 7 (sPAPR) is not supported
00:03:58.693 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:03:58.693 EAL: VFIO support initialized
00:03:58.693 EAL: Ask a virtual area of 0x2e000 bytes
00:03:58.693 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:58.693 EAL: Setting up physically contiguous memory...
00:03:58.693 EAL: Setting maximum number of open files to 524288
00:03:58.693 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:58.693 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:58.693 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:58.693 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.693 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:58.693 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:58.693 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.693 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:58.693 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:58.693 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.693 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:58.693 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:58.693 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.693 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:58.693 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:58.693 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.693 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:58.693 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:58.693 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.693 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:58.693 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:58.693 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.693 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:58.693 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:58.693 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.693 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:58.693 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:58.693 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:58.693 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.693 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:58.693 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:58.693 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.693 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:58.693 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:58.693 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.693 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:58.693 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:58.693 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.693 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:58.693 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:58.693 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.693 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:58.693 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:58.693 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.693 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:58.693 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:58.693 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.693 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:58.694 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:58.694 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.694 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:58.694 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:58.694 EAL: Hugepages will be freed exactly as allocated.
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: TSC frequency is ~2100000 KHz
00:03:58.694 EAL: Main lcore 0 is ready (tid=7faea29f4a00;cpuset=[0])
00:03:58.694 EAL: Trying to obtain current memory policy.
00:03:58.694 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.694 EAL: Restoring previous memory policy: 0
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was expanded by 2MB
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:03:58.694 EAL: Mem event callback 'spdk:(nil)' registered
00:03:58.694
00:03:58.694
00:03:58.694 CUnit - A unit testing framework for C - Version 2.1-3
00:03:58.694 http://cunit.sourceforge.net/
00:03:58.694
00:03:58.694
00:03:58.694 Suite: components_suite
00:03:58.694 Test: vtophys_malloc_test ...passed
00:03:58.694 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:58.694 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.694 EAL: Restoring previous memory policy: 4
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was expanded by 4MB
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was shrunk by 4MB
00:03:58.694 EAL: Trying to obtain current memory policy.
00:03:58.694 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.694 EAL: Restoring previous memory policy: 4
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was expanded by 6MB
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was shrunk by 6MB
00:03:58.694 EAL: Trying to obtain current memory policy.
00:03:58.694 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.694 EAL: Restoring previous memory policy: 4
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was expanded by 10MB
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was shrunk by 10MB
00:03:58.694 EAL: Trying to obtain current memory policy.
00:03:58.694 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.694 EAL: Restoring previous memory policy: 4
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was expanded by 18MB
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was shrunk by 18MB
00:03:58.694 EAL: Trying to obtain current memory policy.
00:03:58.694 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.694 EAL: Restoring previous memory policy: 4
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was expanded by 34MB
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was shrunk by 34MB
00:03:58.694 EAL: Trying to obtain current memory policy.
00:03:58.694 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.694 EAL: Restoring previous memory policy: 4
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was expanded by 66MB
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was shrunk by 66MB
00:03:58.694 EAL: Trying to obtain current memory policy.
00:03:58.694 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.694 EAL: Restoring previous memory policy: 4
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was expanded by 130MB
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was shrunk by 130MB
00:03:58.694 EAL: Trying to obtain current memory policy.
00:03:58.694 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.694 EAL: Restoring previous memory policy: 4
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was expanded by 258MB
00:03:58.694 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.694 EAL: request: mp_malloc_sync
00:03:58.694 EAL: No shared files mode enabled, IPC is disabled
00:03:58.694 EAL: Heap on socket 0 was shrunk by 258MB
00:03:58.694 EAL: Trying to obtain current memory policy.
00:03:58.694 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:59.023 EAL: Restoring previous memory policy: 4
00:03:59.023 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.023 EAL: request: mp_malloc_sync
00:03:59.023 EAL: No shared files mode enabled, IPC is disabled
00:03:59.023 EAL: Heap on socket 0 was expanded by 514MB
00:03:59.023 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.023 EAL: request: mp_malloc_sync
00:03:59.023 EAL: No shared files mode enabled, IPC is disabled
00:03:59.023 EAL: Heap on socket 0 was shrunk by 514MB
00:03:59.023 EAL: Trying to obtain current memory policy.
00:03:59.023 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:59.282 EAL: Restoring previous memory policy: 4
00:03:59.282 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.282 EAL: request: mp_malloc_sync
00:03:59.282 EAL: No shared files mode enabled, IPC is disabled
00:03:59.282 EAL: Heap on socket 0 was expanded by 1026MB
00:03:59.282 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.541 EAL: request: mp_malloc_sync
00:03:59.541 EAL: No shared files mode enabled, IPC is disabled
00:03:59.541 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:59.541 passed
00:03:59.541
00:03:59.541 Run Summary: Type Total Ran Passed Failed Inactive
00:03:59.541 suites 1 1 n/a 0 0
00:03:59.541 tests 2 2 2 0 0
00:03:59.541 asserts 497 497 497 0 n/a
00:03:59.541
00:03:59.541 Elapsed time = 0.966 seconds
00:03:59.541 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.541 EAL: request: mp_malloc_sync
00:03:59.541 EAL: No shared files mode enabled, IPC is disabled
00:03:59.541 EAL: Heap on socket 0 was shrunk by 2MB
00:03:59.541 EAL: No shared files mode enabled, IPC is disabled
00:03:59.541 EAL: No shared files mode enabled, IPC is disabled
00:03:59.541 EAL: No shared files mode enabled, IPC is disabled
00:03:59.541
00:03:59.541 real 0m1.089s
00:03:59.541 user 0m0.636s
00:03:59.541 sys 0m0.424s
00:03:59.541 18:11:52 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:59.541 18:11:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:59.541 ************************************
00:03:59.541 END TEST env_vtophys
00:03:59.541 ************************************
00:03:59.541 18:11:52 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:59.541 18:11:52 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:59.541 18:11:52 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:59.541 18:11:52 env -- common/autotest_common.sh@10 -- # set +x
00:03:59.541 ************************************
00:03:59.541 START TEST env_pci
00:03:59.541 ************************************
00:03:59.541 18:11:52 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:59.541
00:03:59.541
00:03:59.541 CUnit - A unit testing framework for C - Version 2.1-3
00:03:59.541 http://cunit.sourceforge.net/
00:03:59.541
00:03:59.541
00:03:59.541 Suite: pci
00:03:59.541 Test: pci_hook ...[2024-10-08 18:11:52.782731] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 220177 has claimed it
00:03:59.541 EAL: Cannot find device (10000:00:01.0)
00:03:59.541 EAL: Failed to attach device on primary process
00:03:59.541 passed
00:03:59.541
00:03:59.541 Run Summary: Type Total Ran Passed Failed Inactive
00:03:59.541               suites      1      1    n/a      0        0
00:03:59.541                tests      1      1      1      0        0
00:03:59.541              asserts     25     25     25      0      n/a
00:03:59.541 
00:03:59.541 Elapsed time =    0.028 seconds
00:03:59.541 
00:03:59.541 real	0m0.048s
00:03:59.541 user	0m0.012s
00:03:59.541 sys	0m0.036s
00:03:59.541 18:11:52 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:59.541 18:11:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:59.541 ************************************
00:03:59.541 END TEST env_pci
00:03:59.541 ************************************
00:03:59.541 18:11:52 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:59.541 18:11:52 env -- env/env.sh@15 -- # uname
00:03:59.541 18:11:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:59.541 18:11:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:59.542 18:11:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:59.542 18:11:52 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:03:59.542 18:11:52 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:59.542 18:11:52 env -- common/autotest_common.sh@10 -- # set +x
00:03:59.801 ************************************
00:03:59.801 START TEST env_dpdk_post_init
00:03:59.801 ************************************
00:03:59.801 18:11:52 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:59.801 EAL: Detected CPU lcores: 96
00:03:59.801 EAL: Detected NUMA nodes: 2
00:03:59.801 EAL: Detected shared linkage of DPDK
00:03:59.801 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:59.801 EAL: Selected IOVA mode 'VA'
00:03:59.801 EAL: VFIO support initialized
00:03:59.801 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:59.801 EAL: Using IOMMU type 1 (Type 1)
00:03:59.801 EAL: Ignore mapping IO port bar(1)
00:03:59.801 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:03:59.801 EAL: Ignore mapping IO port bar(1)
00:03:59.801 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:03:59.801 EAL: Ignore mapping IO port bar(1)
00:03:59.801 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:03:59.801 EAL: Ignore mapping IO port bar(1)
00:03:59.801 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:03:59.801 EAL: Ignore mapping IO port bar(1)
00:03:59.801 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:03:59.801 EAL: Ignore mapping IO port bar(1)
00:03:59.801 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:03:59.801 EAL: Ignore mapping IO port bar(1)
00:03:59.801 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:03:59.801 EAL: Ignore mapping IO port bar(1)
00:03:59.801 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:00.740 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:00.740 EAL: Ignore mapping IO port bar(1)
00:04:00.740 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:00.740 EAL: Ignore mapping IO port bar(1)
00:04:00.740 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:00.740 EAL: Ignore mapping IO port bar(1)
00:04:00.740 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:00.740 EAL: Ignore mapping IO port bar(1)
00:04:00.740 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:00.740 EAL: Ignore mapping IO port bar(1)
00:04:00.740 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:00.740 EAL: Ignore mapping IO port bar(1)
00:04:00.740 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:00.740 EAL: Ignore mapping IO port bar(1)
00:04:00.740 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:00.740 EAL: Ignore mapping IO port bar(1)
00:04:00.740 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:04.931 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:04.931 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:04.931 Starting DPDK initialization...
00:04:04.931 Starting SPDK post initialization...
00:04:04.931 SPDK NVMe probe
00:04:04.931 Attaching to 0000:5e:00.0
00:04:04.931 Attached to 0000:5e:00.0
00:04:04.931 Cleaning up...
00:04:04.931 
00:04:04.931 real	0m4.899s
00:04:04.931 user	0m3.452s
00:04:04.931 sys	0m0.518s
00:04:04.931 18:11:57 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:04.931 18:11:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:04.931 ************************************
00:04:04.931 END TEST env_dpdk_post_init
00:04:04.931 ************************************
00:04:04.931 18:11:57 env -- env/env.sh@26 -- # uname
00:04:04.931 18:11:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:04.931 18:11:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:04.931 18:11:57 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:04.931 18:11:57 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:04.931 18:11:57 env -- common/autotest_common.sh@10 -- # set +x
00:04:04.931 ************************************
00:04:04.931 START TEST env_mem_callbacks
00:04:04.931 ************************************
00:04:04.931 18:11:57 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:04.931 EAL: Detected CPU lcores: 96
00:04:04.931 EAL: Detected NUMA nodes: 2
00:04:04.931 EAL: Detected shared linkage of DPDK
00:04:04.931 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:04.931 EAL: Selected IOVA mode 'VA'
00:04:04.931 EAL: VFIO support initialized
00:04:04.931 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:04.931 
00:04:04.931 
00:04:04.931 CUnit - A unit testing framework for C - Version 2.1-3
00:04:04.931 http://cunit.sourceforge.net/
00:04:04.931 
00:04:04.931 
00:04:04.931 Suite: memory
00:04:04.931 Test: test ...
00:04:04.931 register 0x200000200000 2097152
00:04:04.931 malloc 3145728
00:04:04.931 register 0x200000400000 4194304
00:04:04.931 buf 0x200000500000 len 3145728 PASSED
00:04:04.931 malloc 64
00:04:04.931 buf 0x2000004fff40 len 64 PASSED
00:04:04.931 malloc 4194304
00:04:04.931 register 0x200000800000 6291456
00:04:04.931 buf 0x200000a00000 len 4194304 PASSED
00:04:04.931 free 0x200000500000 3145728
00:04:04.931 free 0x2000004fff40 64
00:04:04.931 unregister 0x200000400000 4194304 PASSED
00:04:04.931 free 0x200000a00000 4194304
00:04:04.931 unregister 0x200000800000 6291456 PASSED
00:04:04.931 malloc 8388608
00:04:04.931 register 0x200000400000 10485760
00:04:04.931 buf 0x200000600000 len 8388608 PASSED
00:04:04.931 free 0x200000600000 8388608
00:04:04.931 unregister 0x200000400000 10485760 PASSED
00:04:04.931 passed
00:04:04.931 
00:04:04.931 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:04.931               suites      1      1    n/a      0        0
00:04:04.931                tests      1      1      1      0        0
00:04:04.931              asserts     15     15     15      0      n/a
00:04:04.931 
00:04:04.931 Elapsed time =    0.008 seconds
00:04:04.931 
00:04:04.931 real	0m0.061s
00:04:04.931 user	0m0.021s
00:04:04.931 sys	0m0.040s
00:04:04.931 18:11:57 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:04.931 18:11:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:04.931 ************************************
00:04:04.931 END TEST env_mem_callbacks
00:04:04.931 ************************************
00:04:04.931 
00:04:04.931 real	0m6.771s
00:04:04.931 user	0m4.482s
00:04:04.931 sys	0m1.366s
00:04:04.931 18:11:57 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:04.931 18:11:57 env -- common/autotest_common.sh@10 -- # set +x
00:04:04.931 ************************************
00:04:04.931 END TEST env
00:04:04.931 ************************************
00:04:04.931 18:11:57 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:04.931 18:11:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:04.931 18:11:57 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:04.931 18:11:57 -- common/autotest_common.sh@10 -- # set +x
00:04:04.931 ************************************
00:04:04.931 START TEST rpc
00:04:04.931 ************************************
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:04.931 * Looking for test storage...
00:04:04.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@1681 -- # lcov --version
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:04:04.931 18:11:58 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:04.931 18:11:58 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:04.931 18:11:58 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:04.931 18:11:58 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:04.931 18:11:58 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:04.931 18:11:58 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:04.931 18:11:58 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:04.931 18:11:58 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:04.931 18:11:58 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:04.931 18:11:58 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:04.931 18:11:58 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:04.931 18:11:58 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:04.931 18:11:58 rpc -- scripts/common.sh@345 -- # : 1
00:04:04.931 18:11:58 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:04.931 18:11:58 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:04.931 18:11:58 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:04.931 18:11:58 rpc -- scripts/common.sh@353 -- # local d=1
00:04:04.931 18:11:58 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:04.931 18:11:58 rpc -- scripts/common.sh@355 -- # echo 1
00:04:04.931 18:11:58 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:04.931 18:11:58 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:04.931 18:11:58 rpc -- scripts/common.sh@353 -- # local d=2
00:04:04.931 18:11:58 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:04.931 18:11:58 rpc -- scripts/common.sh@355 -- # echo 2
00:04:04.931 18:11:58 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:04.931 18:11:58 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:04.931 18:11:58 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:04.931 18:11:58 rpc -- scripts/common.sh@368 -- # return 0
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:04:04.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:04.931 --rc genhtml_branch_coverage=1
00:04:04.931 --rc genhtml_function_coverage=1
00:04:04.931 --rc genhtml_legend=1
00:04:04.931 --rc geninfo_all_blocks=1
00:04:04.931 --rc geninfo_unexecuted_blocks=1
00:04:04.931 
00:04:04.931 '
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:04:04.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:04.931 --rc genhtml_branch_coverage=1
00:04:04.931 --rc genhtml_function_coverage=1
00:04:04.931 --rc genhtml_legend=1
00:04:04.931 --rc geninfo_all_blocks=1
00:04:04.931 --rc geninfo_unexecuted_blocks=1
00:04:04.931 
00:04:04.931 '
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:04:04.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:04.931 --rc genhtml_branch_coverage=1
00:04:04.931 --rc genhtml_function_coverage=1
00:04:04.931 --rc genhtml_legend=1
00:04:04.931 --rc geninfo_all_blocks=1
00:04:04.931 --rc geninfo_unexecuted_blocks=1
00:04:04.931 
00:04:04.931 '
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:04:04.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:04.931 --rc genhtml_branch_coverage=1
00:04:04.931 --rc genhtml_function_coverage=1
00:04:04.931 --rc genhtml_legend=1
00:04:04.931 --rc geninfo_all_blocks=1
00:04:04.931 --rc geninfo_unexecuted_blocks=1
00:04:04.931 
00:04:04.931 '
00:04:04.931 18:11:58 rpc -- rpc/rpc.sh@65 -- # spdk_pid=221223
00:04:04.931 18:11:58 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:04.931 18:11:58 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:04.931 18:11:58 rpc -- rpc/rpc.sh@67 -- # waitforlisten 221223
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@831 -- # '[' -z 221223 ']'
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:04.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:04.931 18:11:58 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:05.191 [2024-10-08 18:11:58.257661] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:04:05.191 [2024-10-08 18:11:58.257708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221223 ]
00:04:05.191 [2024-10-08 18:11:58.313155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:05.191 [2024-10-08 18:11:58.391146] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:05.191 [2024-10-08 18:11:58.391183] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 221223' to capture a snapshot of events at runtime.
00:04:05.191 [2024-10-08 18:11:58.391190] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:05.191 [2024-10-08 18:11:58.391196] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:05.191 [2024-10-08 18:11:58.391202] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid221223 for offline analysis/debug.
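The NOTICE lines above describe the tracing hooks that come with starting the target as 'spdk_tgt -e bdev': the bdev tracepoint group (mask 0x8, per the trace_get_info output later in this run) is enabled, and the target keeps its trace ring in a shared-memory file named after its pid. A sketch of that workflow under the same workspace layout as this job (the RPC traffic in between is a placeholder):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -e bdev &      # enable the bdev tracepoint group
    pid=$!
    # ... drive some RPCs, then snapshot the trace ring for this pid:
    $SPDK/build/bin/spdk_trace -s spdk_tgt -p $pid
    # or keep /dev/shm/spdk_tgt_trace.pid$pid around for offline analysis,
    # as the log itself suggests.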
00:04:05.191 [2024-10-08 18:11:58.391751] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.128 18:11:59 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:06.128 18:11:59 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:06.128 18:11:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.128 18:11:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.128 18:11:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:06.128 18:11:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:06.128 18:11:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.128 18:11:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.128 18:11:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.128 ************************************ 00:04:06.128 START TEST rpc_integrity 00:04:06.128 ************************************ 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:06.128 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.128 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:06.128 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:06.128 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:06.128 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.128 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:06.128 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.128 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:06.128 { 00:04:06.128 "name": "Malloc0", 00:04:06.128 "aliases": [ 00:04:06.128 "acf02557-985c-45d0-891d-670bb82d1668" 00:04:06.128 ], 00:04:06.128 "product_name": "Malloc disk", 00:04:06.128 "block_size": 512, 00:04:06.128 "num_blocks": 16384, 00:04:06.128 "uuid": "acf02557-985c-45d0-891d-670bb82d1668", 00:04:06.128 "assigned_rate_limits": { 00:04:06.128 "rw_ios_per_sec": 0, 00:04:06.128 "rw_mbytes_per_sec": 0, 00:04:06.128 "r_mbytes_per_sec": 0, 00:04:06.128 "w_mbytes_per_sec": 0 00:04:06.128 }, 
00:04:06.128 "claimed": false, 00:04:06.128 "zoned": false, 00:04:06.128 "supported_io_types": { 00:04:06.128 "read": true, 00:04:06.128 "write": true, 00:04:06.128 "unmap": true, 00:04:06.128 "flush": true, 00:04:06.128 "reset": true, 00:04:06.128 "nvme_admin": false, 00:04:06.128 "nvme_io": false, 00:04:06.128 "nvme_io_md": false, 00:04:06.128 "write_zeroes": true, 00:04:06.128 "zcopy": true, 00:04:06.128 "get_zone_info": false, 00:04:06.128 "zone_management": false, 00:04:06.128 "zone_append": false, 00:04:06.128 "compare": false, 00:04:06.128 "compare_and_write": false, 00:04:06.128 "abort": true, 00:04:06.128 "seek_hole": false, 00:04:06.128 "seek_data": false, 00:04:06.128 "copy": true, 00:04:06.128 "nvme_iov_md": false 00:04:06.128 }, 00:04:06.128 "memory_domains": [ 00:04:06.128 { 00:04:06.128 "dma_device_id": "system", 00:04:06.128 "dma_device_type": 1 00:04:06.128 }, 00:04:06.128 { 00:04:06.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.128 "dma_device_type": 2 00:04:06.128 } 00:04:06.128 ], 00:04:06.128 "driver_specific": {} 00:04:06.128 } 00:04:06.128 ]' 00:04:06.128 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:06.128 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:06.128 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.128 [2024-10-08 18:11:59.242410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:06.128 [2024-10-08 18:11:59.242442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:06.128 [2024-10-08 18:11:59.242456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1305790 00:04:06.128 [2024-10-08 18:11:59.242463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:06.128 [2024-10-08 18:11:59.243583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:06.128 [2024-10-08 18:11:59.243604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:06.128 Passthru0 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.128 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.128 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.128 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:06.128 { 00:04:06.128 "name": "Malloc0", 00:04:06.128 "aliases": [ 00:04:06.128 "acf02557-985c-45d0-891d-670bb82d1668" 00:04:06.128 ], 00:04:06.128 "product_name": "Malloc disk", 00:04:06.128 "block_size": 512, 00:04:06.128 "num_blocks": 16384, 00:04:06.128 "uuid": "acf02557-985c-45d0-891d-670bb82d1668", 00:04:06.128 "assigned_rate_limits": { 00:04:06.128 "rw_ios_per_sec": 0, 00:04:06.128 "rw_mbytes_per_sec": 0, 00:04:06.128 "r_mbytes_per_sec": 0, 00:04:06.128 "w_mbytes_per_sec": 0 00:04:06.128 }, 00:04:06.128 "claimed": true, 00:04:06.128 "claim_type": "exclusive_write", 00:04:06.128 "zoned": false, 00:04:06.128 "supported_io_types": { 00:04:06.128 "read": true, 00:04:06.128 "write": true, 00:04:06.128 "unmap": true, 00:04:06.128 "flush": 
true, 00:04:06.128 "reset": true, 00:04:06.128 "nvme_admin": false, 00:04:06.128 "nvme_io": false, 00:04:06.128 "nvme_io_md": false, 00:04:06.128 "write_zeroes": true, 00:04:06.128 "zcopy": true, 00:04:06.128 "get_zone_info": false, 00:04:06.128 "zone_management": false, 00:04:06.128 "zone_append": false, 00:04:06.128 "compare": false, 00:04:06.128 "compare_and_write": false, 00:04:06.128 "abort": true, 00:04:06.128 "seek_hole": false, 00:04:06.128 "seek_data": false, 00:04:06.128 "copy": true, 00:04:06.128 "nvme_iov_md": false 00:04:06.128 }, 00:04:06.128 "memory_domains": [ 00:04:06.128 { 00:04:06.128 "dma_device_id": "system", 00:04:06.128 "dma_device_type": 1 00:04:06.128 }, 00:04:06.128 { 00:04:06.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.128 "dma_device_type": 2 00:04:06.128 } 00:04:06.128 ], 00:04:06.128 "driver_specific": {} 00:04:06.128 }, 00:04:06.128 { 00:04:06.128 "name": "Passthru0", 00:04:06.128 "aliases": [ 00:04:06.128 "e99cedd4-1818-5588-b105-3a309a5759bd" 00:04:06.128 ], 00:04:06.128 "product_name": "passthru", 00:04:06.128 "block_size": 512, 00:04:06.128 "num_blocks": 16384, 00:04:06.128 "uuid": "e99cedd4-1818-5588-b105-3a309a5759bd", 00:04:06.128 "assigned_rate_limits": { 00:04:06.128 "rw_ios_per_sec": 0, 00:04:06.128 "rw_mbytes_per_sec": 0, 00:04:06.128 "r_mbytes_per_sec": 0, 00:04:06.128 "w_mbytes_per_sec": 0 00:04:06.128 }, 00:04:06.128 "claimed": false, 00:04:06.128 "zoned": false, 00:04:06.128 "supported_io_types": { 00:04:06.128 "read": true, 00:04:06.128 "write": true, 00:04:06.128 "unmap": true, 00:04:06.128 "flush": true, 00:04:06.128 "reset": true, 00:04:06.128 "nvme_admin": false, 00:04:06.128 "nvme_io": false, 00:04:06.128 "nvme_io_md": false, 00:04:06.128 "write_zeroes": true, 00:04:06.128 "zcopy": true, 00:04:06.128 "get_zone_info": false, 00:04:06.128 "zone_management": false, 00:04:06.128 "zone_append": false, 00:04:06.128 "compare": false, 00:04:06.128 "compare_and_write": false, 00:04:06.128 "abort": true, 00:04:06.128 "seek_hole": false, 00:04:06.128 "seek_data": false, 00:04:06.128 "copy": true, 00:04:06.128 "nvme_iov_md": false 00:04:06.128 }, 00:04:06.128 "memory_domains": [ 00:04:06.128 { 00:04:06.128 "dma_device_id": "system", 00:04:06.128 "dma_device_type": 1 00:04:06.128 }, 00:04:06.128 { 00:04:06.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.128 "dma_device_type": 2 00:04:06.128 } 00:04:06.128 ], 00:04:06.128 "driver_specific": { 00:04:06.128 "passthru": { 00:04:06.128 "name": "Passthru0", 00:04:06.129 "base_bdev_name": "Malloc0" 00:04:06.129 } 00:04:06.129 } 00:04:06.129 } 00:04:06.129 ]' 00:04:06.129 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:06.129 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:06.129 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:06.129 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.129 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.129 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.129 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:06.129 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.129 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.129 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.129 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:06.129 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.129 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.129 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.129 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:06.129 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:06.129 18:11:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:06.129 00:04:06.129 real 0m0.279s 00:04:06.129 user 0m0.166s 00:04:06.129 sys 0m0.044s 00:04:06.129 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.129 18:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.129 ************************************ 00:04:06.129 END TEST rpc_integrity 00:04:06.129 ************************************ 00:04:06.129 18:11:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:06.129 18:11:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.129 18:11:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.129 18:11:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.388 ************************************ 00:04:06.388 START TEST rpc_plugins 00:04:06.388 ************************************ 00:04:06.388 18:11:59 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:06.388 18:11:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:06.388 18:11:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.388 18:11:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.388 18:11:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.388 18:11:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:06.388 18:11:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:06.388 18:11:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.388 18:11:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.388 18:11:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.388 18:11:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:06.388 { 00:04:06.388 "name": "Malloc1", 00:04:06.388 "aliases": [ 00:04:06.388 "9d2cf782-dc0e-4c55-807c-8c6d2e93f101" 00:04:06.388 ], 00:04:06.388 "product_name": "Malloc disk", 00:04:06.388 "block_size": 4096, 00:04:06.388 "num_blocks": 256, 00:04:06.388 "uuid": "9d2cf782-dc0e-4c55-807c-8c6d2e93f101", 00:04:06.388 "assigned_rate_limits": { 00:04:06.388 "rw_ios_per_sec": 0, 00:04:06.388 "rw_mbytes_per_sec": 0, 00:04:06.388 "r_mbytes_per_sec": 0, 00:04:06.388 "w_mbytes_per_sec": 0 00:04:06.388 }, 00:04:06.388 "claimed": false, 00:04:06.388 "zoned": false, 00:04:06.388 "supported_io_types": { 00:04:06.388 "read": true, 00:04:06.388 "write": true, 00:04:06.388 "unmap": true, 00:04:06.388 "flush": true, 00:04:06.388 "reset": true, 00:04:06.388 "nvme_admin": false, 00:04:06.388 "nvme_io": false, 00:04:06.388 "nvme_io_md": false, 00:04:06.388 "write_zeroes": true, 00:04:06.388 "zcopy": true, 00:04:06.388 "get_zone_info": false, 00:04:06.388 "zone_management": false, 00:04:06.388 "zone_append": false, 00:04:06.388 "compare": false, 00:04:06.388 "compare_and_write": false, 00:04:06.388 "abort": true, 00:04:06.388 "seek_hole": false, 00:04:06.388 "seek_data": false, 00:04:06.388 "copy": true, 00:04:06.388 "nvme_iov_md": false 
00:04:06.388 }, 00:04:06.388 "memory_domains": [ 00:04:06.388 { 00:04:06.388 "dma_device_id": "system", 00:04:06.388 "dma_device_type": 1 00:04:06.388 }, 00:04:06.388 { 00:04:06.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.388 "dma_device_type": 2 00:04:06.388 } 00:04:06.388 ], 00:04:06.388 "driver_specific": {} 00:04:06.388 } 00:04:06.388 ]' 00:04:06.388 18:11:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:06.388 18:11:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:06.388 18:11:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:06.388 18:11:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.388 18:11:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.388 18:11:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.388 18:11:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:06.388 18:11:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.388 18:11:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.388 18:11:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.388 18:11:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:06.388 18:11:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:06.388 18:11:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:06.388 00:04:06.388 real 0m0.139s 00:04:06.388 user 0m0.082s 00:04:06.388 sys 0m0.020s 00:04:06.388 18:11:59 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.388 18:11:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.388 ************************************ 00:04:06.388 END TEST rpc_plugins 00:04:06.388 ************************************ 00:04:06.388 18:11:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:06.388 18:11:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.388 18:11:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.388 18:11:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.388 ************************************ 00:04:06.388 START TEST rpc_trace_cmd_test 00:04:06.388 ************************************ 00:04:06.388 18:11:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:06.388 18:11:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:06.388 18:11:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:06.388 18:11:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.388 18:11:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:06.388 18:11:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.388 18:11:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:06.388 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid221223", 00:04:06.388 "tpoint_group_mask": "0x8", 00:04:06.388 "iscsi_conn": { 00:04:06.388 "mask": "0x2", 00:04:06.388 "tpoint_mask": "0x0" 00:04:06.388 }, 00:04:06.388 "scsi": { 00:04:06.388 "mask": "0x4", 00:04:06.388 "tpoint_mask": "0x0" 00:04:06.388 }, 00:04:06.388 "bdev": { 00:04:06.388 "mask": "0x8", 00:04:06.388 "tpoint_mask": "0xffffffffffffffff" 00:04:06.388 }, 00:04:06.388 "nvmf_rdma": { 00:04:06.388 "mask": "0x10", 00:04:06.388 "tpoint_mask": "0x0" 00:04:06.388 }, 00:04:06.388 "nvmf_tcp": { 00:04:06.388 "mask": "0x20", 00:04:06.388 
"tpoint_mask": "0x0" 00:04:06.388 }, 00:04:06.388 "ftl": { 00:04:06.388 "mask": "0x40", 00:04:06.388 "tpoint_mask": "0x0" 00:04:06.388 }, 00:04:06.388 "blobfs": { 00:04:06.388 "mask": "0x80", 00:04:06.388 "tpoint_mask": "0x0" 00:04:06.388 }, 00:04:06.388 "dsa": { 00:04:06.388 "mask": "0x200", 00:04:06.388 "tpoint_mask": "0x0" 00:04:06.388 }, 00:04:06.388 "thread": { 00:04:06.388 "mask": "0x400", 00:04:06.388 "tpoint_mask": "0x0" 00:04:06.388 }, 00:04:06.388 "nvme_pcie": { 00:04:06.388 "mask": "0x800", 00:04:06.388 "tpoint_mask": "0x0" 00:04:06.388 }, 00:04:06.388 "iaa": { 00:04:06.388 "mask": "0x1000", 00:04:06.388 "tpoint_mask": "0x0" 00:04:06.388 }, 00:04:06.388 "nvme_tcp": { 00:04:06.388 "mask": "0x2000", 00:04:06.388 "tpoint_mask": "0x0" 00:04:06.388 }, 00:04:06.388 "bdev_nvme": { 00:04:06.388 "mask": "0x4000", 00:04:06.388 "tpoint_mask": "0x0" 00:04:06.388 }, 00:04:06.388 "sock": { 00:04:06.388 "mask": "0x8000", 00:04:06.388 "tpoint_mask": "0x0" 00:04:06.388 }, 00:04:06.388 "blob": { 00:04:06.388 "mask": "0x10000", 00:04:06.388 "tpoint_mask": "0x0" 00:04:06.388 }, 00:04:06.388 "bdev_raid": { 00:04:06.388 "mask": "0x20000", 00:04:06.388 "tpoint_mask": "0x0" 00:04:06.388 }, 00:04:06.388 "scheduler": { 00:04:06.388 "mask": "0x40000", 00:04:06.388 "tpoint_mask": "0x0" 00:04:06.388 } 00:04:06.388 }' 00:04:06.388 18:11:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:06.647 18:11:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:06.647 18:11:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:06.647 18:11:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:06.647 18:11:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:06.647 18:11:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:06.647 18:11:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:06.647 18:11:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:06.647 18:11:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:06.647 18:11:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:06.647 00:04:06.647 real 0m0.195s 00:04:06.647 user 0m0.164s 00:04:06.647 sys 0m0.022s 00:04:06.647 18:11:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.647 18:11:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:06.647 ************************************ 00:04:06.647 END TEST rpc_trace_cmd_test 00:04:06.647 ************************************ 00:04:06.647 18:11:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:06.647 18:11:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:06.647 18:11:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:06.647 18:11:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.647 18:11:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.647 18:11:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.647 ************************************ 00:04:06.647 START TEST rpc_daemon_integrity 00:04:06.647 ************************************ 00:04:06.647 18:11:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:06.647 18:11:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:06.647 18:11:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.647 18:11:59 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.647 18:11:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.647 18:11:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:06.647 18:11:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:06.907 18:11:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:06.907 18:11:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:06.907 18:11:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.907 18:11:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.907 18:11:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.907 18:11:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:06.907 18:11:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:06.907 18:11:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.907 18:11:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:06.907 { 00:04:06.907 "name": "Malloc2", 00:04:06.907 "aliases": [ 00:04:06.907 "611083a2-d53b-45bd-8fc3-e2d3c574b2d7" 00:04:06.907 ], 00:04:06.907 "product_name": "Malloc disk", 00:04:06.907 "block_size": 512, 00:04:06.907 "num_blocks": 16384, 00:04:06.907 "uuid": "611083a2-d53b-45bd-8fc3-e2d3c574b2d7", 00:04:06.907 "assigned_rate_limits": { 00:04:06.907 "rw_ios_per_sec": 0, 00:04:06.907 "rw_mbytes_per_sec": 0, 00:04:06.907 "r_mbytes_per_sec": 0, 00:04:06.907 "w_mbytes_per_sec": 0 00:04:06.907 }, 00:04:06.907 "claimed": false, 00:04:06.907 "zoned": false, 00:04:06.907 "supported_io_types": { 00:04:06.907 "read": true, 00:04:06.907 "write": true, 00:04:06.907 "unmap": true, 00:04:06.907 "flush": true, 00:04:06.907 "reset": true, 00:04:06.907 "nvme_admin": false, 00:04:06.907 "nvme_io": false, 00:04:06.907 "nvme_io_md": false, 00:04:06.907 "write_zeroes": true, 00:04:06.907 "zcopy": true, 00:04:06.907 "get_zone_info": false, 00:04:06.907 "zone_management": false, 00:04:06.907 "zone_append": false, 00:04:06.907 "compare": false, 00:04:06.907 "compare_and_write": false, 00:04:06.907 "abort": true, 00:04:06.907 "seek_hole": false, 00:04:06.907 "seek_data": false, 00:04:06.907 "copy": true, 00:04:06.907 "nvme_iov_md": false 00:04:06.907 }, 00:04:06.907 "memory_domains": [ 00:04:06.907 { 00:04:06.907 "dma_device_id": "system", 00:04:06.907 "dma_device_type": 1 00:04:06.907 }, 00:04:06.907 { 00:04:06.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.907 "dma_device_type": 2 00:04:06.907 } 00:04:06.907 ], 00:04:06.907 "driver_specific": {} 00:04:06.907 } 00:04:06.907 ]' 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.907 [2024-10-08 18:12:00.060628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:06.907 
[2024-10-08 18:12:00.060661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:06.907 [2024-10-08 18:12:00.060677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1306330 00:04:06.907 [2024-10-08 18:12:00.060684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:06.907 [2024-10-08 18:12:00.061920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:06.907 [2024-10-08 18:12:00.061943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:06.907 Passthru0 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:06.907 { 00:04:06.907 "name": "Malloc2", 00:04:06.907 "aliases": [ 00:04:06.907 "611083a2-d53b-45bd-8fc3-e2d3c574b2d7" 00:04:06.907 ], 00:04:06.907 "product_name": "Malloc disk", 00:04:06.907 "block_size": 512, 00:04:06.907 "num_blocks": 16384, 00:04:06.907 "uuid": "611083a2-d53b-45bd-8fc3-e2d3c574b2d7", 00:04:06.907 "assigned_rate_limits": { 00:04:06.907 "rw_ios_per_sec": 0, 00:04:06.907 "rw_mbytes_per_sec": 0, 00:04:06.907 "r_mbytes_per_sec": 0, 00:04:06.907 "w_mbytes_per_sec": 0 00:04:06.907 }, 00:04:06.907 "claimed": true, 00:04:06.907 "claim_type": "exclusive_write", 00:04:06.907 "zoned": false, 00:04:06.907 "supported_io_types": { 00:04:06.907 "read": true, 00:04:06.907 "write": true, 00:04:06.907 "unmap": true, 00:04:06.907 "flush": true, 00:04:06.907 "reset": true, 00:04:06.907 "nvme_admin": false, 00:04:06.907 "nvme_io": false, 00:04:06.907 "nvme_io_md": false, 00:04:06.907 "write_zeroes": true, 00:04:06.907 "zcopy": true, 00:04:06.907 "get_zone_info": false, 00:04:06.907 "zone_management": false, 00:04:06.907 "zone_append": false, 00:04:06.907 "compare": false, 00:04:06.907 "compare_and_write": false, 00:04:06.907 "abort": true, 00:04:06.907 "seek_hole": false, 00:04:06.907 "seek_data": false, 00:04:06.907 "copy": true, 00:04:06.907 "nvme_iov_md": false 00:04:06.907 }, 00:04:06.907 "memory_domains": [ 00:04:06.907 { 00:04:06.907 "dma_device_id": "system", 00:04:06.907 "dma_device_type": 1 00:04:06.907 }, 00:04:06.907 { 00:04:06.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.907 "dma_device_type": 2 00:04:06.907 } 00:04:06.907 ], 00:04:06.907 "driver_specific": {} 00:04:06.907 }, 00:04:06.907 { 00:04:06.907 "name": "Passthru0", 00:04:06.907 "aliases": [ 00:04:06.907 "ccf4b243-9619-57c4-816a-5b373161f9bc" 00:04:06.907 ], 00:04:06.907 "product_name": "passthru", 00:04:06.907 "block_size": 512, 00:04:06.907 "num_blocks": 16384, 00:04:06.907 "uuid": "ccf4b243-9619-57c4-816a-5b373161f9bc", 00:04:06.907 "assigned_rate_limits": { 00:04:06.907 "rw_ios_per_sec": 0, 00:04:06.907 "rw_mbytes_per_sec": 0, 00:04:06.907 "r_mbytes_per_sec": 0, 00:04:06.907 "w_mbytes_per_sec": 0 00:04:06.907 }, 00:04:06.907 "claimed": false, 00:04:06.907 "zoned": false, 00:04:06.907 "supported_io_types": { 00:04:06.907 "read": true, 00:04:06.907 "write": true, 00:04:06.907 "unmap": true, 00:04:06.907 "flush": true, 00:04:06.907 "reset": true, 
00:04:06.907 "nvme_admin": false, 00:04:06.907 "nvme_io": false, 00:04:06.907 "nvme_io_md": false, 00:04:06.907 "write_zeroes": true, 00:04:06.907 "zcopy": true, 00:04:06.907 "get_zone_info": false, 00:04:06.907 "zone_management": false, 00:04:06.907 "zone_append": false, 00:04:06.907 "compare": false, 00:04:06.907 "compare_and_write": false, 00:04:06.907 "abort": true, 00:04:06.907 "seek_hole": false, 00:04:06.907 "seek_data": false, 00:04:06.907 "copy": true, 00:04:06.907 "nvme_iov_md": false 00:04:06.907 }, 00:04:06.907 "memory_domains": [ 00:04:06.907 { 00:04:06.907 "dma_device_id": "system", 00:04:06.907 "dma_device_type": 1 00:04:06.907 }, 00:04:06.907 { 00:04:06.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.907 "dma_device_type": 2 00:04:06.907 } 00:04:06.907 ], 00:04:06.907 "driver_specific": { 00:04:06.907 "passthru": { 00:04:06.907 "name": "Passthru0", 00:04:06.907 "base_bdev_name": "Malloc2" 00:04:06.907 } 00:04:06.907 } 00:04:06.907 } 00:04:06.907 ]' 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.907 18:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:06.908 18:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:06.908 18:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:06.908 00:04:06.908 real 0m0.282s 00:04:06.908 user 0m0.180s 00:04:06.908 sys 0m0.040s 00:04:06.908 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.908 18:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.908 ************************************ 00:04:06.908 END TEST rpc_daemon_integrity 00:04:06.908 ************************************ 00:04:07.166 18:12:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:07.167 18:12:00 rpc -- rpc/rpc.sh@84 -- # killprocess 221223 00:04:07.167 18:12:00 rpc -- common/autotest_common.sh@950 -- # '[' -z 221223 ']' 00:04:07.167 18:12:00 rpc -- common/autotest_common.sh@954 -- # kill -0 221223 00:04:07.167 18:12:00 rpc -- common/autotest_common.sh@955 -- # uname 00:04:07.167 18:12:00 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:07.167 18:12:00 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 221223 
00:04:07.167 18:12:00 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:07.167 18:12:00 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:07.167 18:12:00 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 221223' 00:04:07.167 killing process with pid 221223 00:04:07.167 18:12:00 rpc -- common/autotest_common.sh@969 -- # kill 221223 00:04:07.167 18:12:00 rpc -- common/autotest_common.sh@974 -- # wait 221223 00:04:07.425 00:04:07.425 real 0m2.593s 00:04:07.425 user 0m3.267s 00:04:07.425 sys 0m0.733s 00:04:07.425 18:12:00 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.425 18:12:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.425 ************************************ 00:04:07.425 END TEST rpc 00:04:07.425 ************************************ 00:04:07.425 18:12:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:07.425 18:12:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.425 18:12:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.425 18:12:00 -- common/autotest_common.sh@10 -- # set +x 00:04:07.425 ************************************ 00:04:07.425 START TEST skip_rpc 00:04:07.425 ************************************ 00:04:07.425 18:12:00 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:07.684 * Looking for test storage... 00:04:07.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:07.684 18:12:00 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:07.684 18:12:00 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:07.684 18:12:00 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:07.684 18:12:00 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.684 18:12:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:07.684 18:12:00 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.684 18:12:00 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:07.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.684 --rc genhtml_branch_coverage=1 00:04:07.684 --rc genhtml_function_coverage=1 00:04:07.684 --rc genhtml_legend=1 00:04:07.684 --rc geninfo_all_blocks=1 00:04:07.684 --rc geninfo_unexecuted_blocks=1 00:04:07.684 00:04:07.684 ' 00:04:07.684 18:12:00 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:07.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.684 --rc genhtml_branch_coverage=1 00:04:07.684 --rc genhtml_function_coverage=1 00:04:07.684 --rc genhtml_legend=1 00:04:07.684 --rc geninfo_all_blocks=1 00:04:07.684 --rc geninfo_unexecuted_blocks=1 00:04:07.684 00:04:07.684 ' 00:04:07.684 18:12:00 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:07.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.684 --rc genhtml_branch_coverage=1 00:04:07.684 --rc genhtml_function_coverage=1 00:04:07.684 --rc genhtml_legend=1 00:04:07.684 --rc geninfo_all_blocks=1 00:04:07.684 --rc geninfo_unexecuted_blocks=1 00:04:07.684 00:04:07.684 ' 00:04:07.684 18:12:00 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:07.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.684 --rc genhtml_branch_coverage=1 00:04:07.684 --rc genhtml_function_coverage=1 00:04:07.684 --rc genhtml_legend=1 00:04:07.684 --rc geninfo_all_blocks=1 00:04:07.684 --rc geninfo_unexecuted_blocks=1 00:04:07.684 00:04:07.684 ' 00:04:07.684 18:12:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:07.684 18:12:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:07.684 18:12:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:07.684 18:12:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.684 18:12:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.684 18:12:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.684 ************************************ 00:04:07.684 START TEST skip_rpc 00:04:07.684 ************************************ 00:04:07.684 18:12:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:07.684 
18:12:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=221930 00:04:07.684 18:12:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.684 18:12:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:07.684 18:12:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:07.684 [2024-10-08 18:12:00.957018] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:04:07.684 [2024-10-08 18:12:00.957060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221930 ] 00:04:07.944 [2024-10-08 18:12:01.025937] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.944 [2024-10-08 18:12:01.104335] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 221930 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 221930 ']' 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 221930 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 221930 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 221930' 00:04:13.215 killing process with pid 221930 00:04:13.215 18:12:05 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 221930 00:04:13.215 18:12:05 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 221930 00:04:13.215 00:04:13.215 real 0m5.392s 00:04:13.215 user 0m5.149s 00:04:13.215 sys 0m0.282s 00:04:13.215 18:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.215 18:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.215 ************************************ 00:04:13.215 END TEST skip_rpc 00:04:13.215 ************************************ 00:04:13.215 18:12:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:13.215 18:12:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.215 18:12:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.215 18:12:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.215 ************************************ 00:04:13.215 START TEST skip_rpc_with_json 00:04:13.215 ************************************ 00:04:13.215 18:12:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:13.215 18:12:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:13.215 18:12:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=222945 00:04:13.215 18:12:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.215 18:12:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:13.215 18:12:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 222945 00:04:13.215 18:12:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 222945 ']' 00:04:13.215 18:12:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.215 18:12:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:13.215 18:12:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.215 18:12:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:13.215 18:12:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.215 [2024-10-08 18:12:06.416426] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
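skip_rpc_with_json, which starts here, checks that configuration built over RPC survives a save_config/--json round trip: create the TCP transport via RPC, dump the config to JSON, relaunch the target from that file with the RPC server disabled, and grep the relaunch log for the transport-init notice. A condensed sketch of that round trip, under the same spdk_tgt/rpc.py shorthand as above (waitforlisten polling elided):

    # Build state over RPC, save it, then replay it without an RPC server.
    spdk_tgt -m 0x1 & pid=$!
    # ... wait for /var/tmp/spdk.sock to accept RPCs ...
    rpc.py nvmf_create_transport -t tcp
    rpc.py save_config > config.json         # full subsystem config as JSON
    kill -SIGINT "$pid"; wait "$pid"
    spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 & pid=$!
    sleep 5; kill -SIGINT "$pid"; wait "$pid"
    grep -q 'TCP Transport Init' log.txt     # replay recreated the transport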
00:04:13.215 [2024-10-08 18:12:06.416468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222945 ] 00:04:13.215 [2024-10-08 18:12:06.484075] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.473 [2024-10-08 18:12:06.554684] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.040 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:14.040 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:14.040 18:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:14.040 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.040 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.040 [2024-10-08 18:12:07.250016] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:14.040 request: 00:04:14.040 { 00:04:14.040 "trtype": "tcp", 00:04:14.040 "method": "nvmf_get_transports", 00:04:14.040 "req_id": 1 00:04:14.040 } 00:04:14.040 Got JSON-RPC error response 00:04:14.040 response: 00:04:14.040 { 00:04:14.040 "code": -19, 00:04:14.040 "message": "No such device" 00:04:14.040 } 00:04:14.040 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:14.040 18:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:14.040 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.040 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.040 [2024-10-08 18:12:07.262124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:14.040 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.040 18:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:14.040 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.040 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.300 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.300 18:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:14.300 { 00:04:14.300 "subsystems": [ 00:04:14.300 { 00:04:14.300 "subsystem": "fsdev", 00:04:14.300 "config": [ 00:04:14.300 { 00:04:14.300 "method": "fsdev_set_opts", 00:04:14.300 "params": { 00:04:14.300 "fsdev_io_pool_size": 65535, 00:04:14.300 "fsdev_io_cache_size": 256 00:04:14.300 } 00:04:14.300 } 00:04:14.300 ] 00:04:14.300 }, 00:04:14.300 { 00:04:14.300 "subsystem": "vfio_user_target", 00:04:14.300 "config": null 00:04:14.300 }, 00:04:14.300 { 00:04:14.300 "subsystem": "keyring", 00:04:14.300 "config": [] 00:04:14.300 }, 00:04:14.300 { 00:04:14.300 "subsystem": "iobuf", 00:04:14.300 "config": [ 00:04:14.300 { 00:04:14.300 "method": "iobuf_set_options", 00:04:14.300 "params": { 00:04:14.300 "small_pool_count": 8192, 00:04:14.300 "large_pool_count": 1024, 00:04:14.300 "small_bufsize": 8192, 00:04:14.300 "large_bufsize": 135168 00:04:14.300 } 00:04:14.300 } 00:04:14.300 ] 00:04:14.300 }, 00:04:14.300 { 
00:04:14.300 "subsystem": "sock", 00:04:14.300 "config": [ 00:04:14.300 { 00:04:14.300 "method": "sock_set_default_impl", 00:04:14.300 "params": { 00:04:14.300 "impl_name": "posix" 00:04:14.300 } 00:04:14.300 }, 00:04:14.300 { 00:04:14.300 "method": "sock_impl_set_options", 00:04:14.300 "params": { 00:04:14.300 "impl_name": "ssl", 00:04:14.300 "recv_buf_size": 4096, 00:04:14.300 "send_buf_size": 4096, 00:04:14.300 "enable_recv_pipe": true, 00:04:14.300 "enable_quickack": false, 00:04:14.300 "enable_placement_id": 0, 00:04:14.300 "enable_zerocopy_send_server": true, 00:04:14.300 "enable_zerocopy_send_client": false, 00:04:14.300 "zerocopy_threshold": 0, 00:04:14.300 "tls_version": 0, 00:04:14.300 "enable_ktls": false 00:04:14.300 } 00:04:14.300 }, 00:04:14.300 { 00:04:14.300 "method": "sock_impl_set_options", 00:04:14.300 "params": { 00:04:14.300 "impl_name": "posix", 00:04:14.300 "recv_buf_size": 2097152, 00:04:14.300 "send_buf_size": 2097152, 00:04:14.300 "enable_recv_pipe": true, 00:04:14.300 "enable_quickack": false, 00:04:14.300 "enable_placement_id": 0, 00:04:14.300 "enable_zerocopy_send_server": true, 00:04:14.300 "enable_zerocopy_send_client": false, 00:04:14.300 "zerocopy_threshold": 0, 00:04:14.300 "tls_version": 0, 00:04:14.300 "enable_ktls": false 00:04:14.300 } 00:04:14.300 } 00:04:14.300 ] 00:04:14.300 }, 00:04:14.300 { 00:04:14.300 "subsystem": "vmd", 00:04:14.300 "config": [] 00:04:14.300 }, 00:04:14.300 { 00:04:14.300 "subsystem": "accel", 00:04:14.300 "config": [ 00:04:14.300 { 00:04:14.300 "method": "accel_set_options", 00:04:14.300 "params": { 00:04:14.300 "small_cache_size": 128, 00:04:14.300 "large_cache_size": 16, 00:04:14.300 "task_count": 2048, 00:04:14.300 "sequence_count": 2048, 00:04:14.300 "buf_count": 2048 00:04:14.300 } 00:04:14.300 } 00:04:14.300 ] 00:04:14.300 }, 00:04:14.300 { 00:04:14.300 "subsystem": "bdev", 00:04:14.300 "config": [ 00:04:14.300 { 00:04:14.300 "method": "bdev_set_options", 00:04:14.300 "params": { 00:04:14.300 "bdev_io_pool_size": 65535, 00:04:14.300 "bdev_io_cache_size": 256, 00:04:14.300 "bdev_auto_examine": true, 00:04:14.300 "iobuf_small_cache_size": 128, 00:04:14.300 "iobuf_large_cache_size": 16 00:04:14.300 } 00:04:14.300 }, 00:04:14.300 { 00:04:14.300 "method": "bdev_raid_set_options", 00:04:14.300 "params": { 00:04:14.300 "process_window_size_kb": 1024, 00:04:14.300 "process_max_bandwidth_mb_sec": 0 00:04:14.300 } 00:04:14.300 }, 00:04:14.300 { 00:04:14.300 "method": "bdev_iscsi_set_options", 00:04:14.301 "params": { 00:04:14.301 "timeout_sec": 30 00:04:14.301 } 00:04:14.301 }, 00:04:14.301 { 00:04:14.301 "method": "bdev_nvme_set_options", 00:04:14.301 "params": { 00:04:14.301 "action_on_timeout": "none", 00:04:14.301 "timeout_us": 0, 00:04:14.301 "timeout_admin_us": 0, 00:04:14.301 "keep_alive_timeout_ms": 10000, 00:04:14.301 "arbitration_burst": 0, 00:04:14.301 "low_priority_weight": 0, 00:04:14.301 "medium_priority_weight": 0, 00:04:14.301 "high_priority_weight": 0, 00:04:14.301 "nvme_adminq_poll_period_us": 10000, 00:04:14.301 "nvme_ioq_poll_period_us": 0, 00:04:14.301 "io_queue_requests": 0, 00:04:14.301 "delay_cmd_submit": true, 00:04:14.301 "transport_retry_count": 4, 00:04:14.301 "bdev_retry_count": 3, 00:04:14.301 "transport_ack_timeout": 0, 00:04:14.301 "ctrlr_loss_timeout_sec": 0, 00:04:14.301 "reconnect_delay_sec": 0, 00:04:14.301 "fast_io_fail_timeout_sec": 0, 00:04:14.301 "disable_auto_failback": false, 00:04:14.301 "generate_uuids": false, 00:04:14.301 "transport_tos": 0, 00:04:14.301 "nvme_error_stat": false, 
00:04:14.301 "rdma_srq_size": 0, 00:04:14.301 "io_path_stat": false, 00:04:14.301 "allow_accel_sequence": false, 00:04:14.301 "rdma_max_cq_size": 0, 00:04:14.301 "rdma_cm_event_timeout_ms": 0, 00:04:14.301 "dhchap_digests": [ 00:04:14.301 "sha256", 00:04:14.301 "sha384", 00:04:14.301 "sha512" 00:04:14.301 ], 00:04:14.301 "dhchap_dhgroups": [ 00:04:14.301 "null", 00:04:14.301 "ffdhe2048", 00:04:14.301 "ffdhe3072", 00:04:14.301 "ffdhe4096", 00:04:14.301 "ffdhe6144", 00:04:14.301 "ffdhe8192" 00:04:14.301 ] 00:04:14.301 } 00:04:14.301 }, 00:04:14.301 { 00:04:14.301 "method": "bdev_nvme_set_hotplug", 00:04:14.301 "params": { 00:04:14.301 "period_us": 100000, 00:04:14.301 "enable": false 00:04:14.301 } 00:04:14.301 }, 00:04:14.301 { 00:04:14.301 "method": "bdev_wait_for_examine" 00:04:14.301 } 00:04:14.301 ] 00:04:14.301 }, 00:04:14.301 { 00:04:14.301 "subsystem": "scsi", 00:04:14.301 "config": null 00:04:14.301 }, 00:04:14.301 { 00:04:14.301 "subsystem": "scheduler", 00:04:14.301 "config": [ 00:04:14.301 { 00:04:14.301 "method": "framework_set_scheduler", 00:04:14.301 "params": { 00:04:14.301 "name": "static" 00:04:14.301 } 00:04:14.301 } 00:04:14.301 ] 00:04:14.301 }, 00:04:14.301 { 00:04:14.301 "subsystem": "vhost_scsi", 00:04:14.301 "config": [] 00:04:14.301 }, 00:04:14.301 { 00:04:14.301 "subsystem": "vhost_blk", 00:04:14.301 "config": [] 00:04:14.301 }, 00:04:14.301 { 00:04:14.301 "subsystem": "ublk", 00:04:14.301 "config": [] 00:04:14.301 }, 00:04:14.301 { 00:04:14.301 "subsystem": "nbd", 00:04:14.301 "config": [] 00:04:14.301 }, 00:04:14.301 { 00:04:14.301 "subsystem": "nvmf", 00:04:14.301 "config": [ 00:04:14.301 { 00:04:14.301 "method": "nvmf_set_config", 00:04:14.301 "params": { 00:04:14.301 "discovery_filter": "match_any", 00:04:14.301 "admin_cmd_passthru": { 00:04:14.301 "identify_ctrlr": false 00:04:14.301 }, 00:04:14.301 "dhchap_digests": [ 00:04:14.301 "sha256", 00:04:14.301 "sha384", 00:04:14.301 "sha512" 00:04:14.301 ], 00:04:14.301 "dhchap_dhgroups": [ 00:04:14.301 "null", 00:04:14.301 "ffdhe2048", 00:04:14.301 "ffdhe3072", 00:04:14.301 "ffdhe4096", 00:04:14.301 "ffdhe6144", 00:04:14.301 "ffdhe8192" 00:04:14.301 ] 00:04:14.301 } 00:04:14.301 }, 00:04:14.301 { 00:04:14.301 "method": "nvmf_set_max_subsystems", 00:04:14.301 "params": { 00:04:14.301 "max_subsystems": 1024 00:04:14.301 } 00:04:14.301 }, 00:04:14.301 { 00:04:14.301 "method": "nvmf_set_crdt", 00:04:14.301 "params": { 00:04:14.301 "crdt1": 0, 00:04:14.301 "crdt2": 0, 00:04:14.301 "crdt3": 0 00:04:14.301 } 00:04:14.301 }, 00:04:14.301 { 00:04:14.301 "method": "nvmf_create_transport", 00:04:14.301 "params": { 00:04:14.301 "trtype": "TCP", 00:04:14.301 "max_queue_depth": 128, 00:04:14.301 "max_io_qpairs_per_ctrlr": 127, 00:04:14.301 "in_capsule_data_size": 4096, 00:04:14.301 "max_io_size": 131072, 00:04:14.301 "io_unit_size": 131072, 00:04:14.301 "max_aq_depth": 128, 00:04:14.301 "num_shared_buffers": 511, 00:04:14.301 "buf_cache_size": 4294967295, 00:04:14.301 "dif_insert_or_strip": false, 00:04:14.301 "zcopy": false, 00:04:14.301 "c2h_success": true, 00:04:14.301 "sock_priority": 0, 00:04:14.301 "abort_timeout_sec": 1, 00:04:14.301 "ack_timeout": 0, 00:04:14.301 "data_wr_pool_size": 0 00:04:14.301 } 00:04:14.301 } 00:04:14.301 ] 00:04:14.301 }, 00:04:14.301 { 00:04:14.301 "subsystem": "iscsi", 00:04:14.301 "config": [ 00:04:14.301 { 00:04:14.301 "method": "iscsi_set_options", 00:04:14.301 "params": { 00:04:14.301 "node_base": "iqn.2016-06.io.spdk", 00:04:14.301 "max_sessions": 128, 00:04:14.301 
"max_connections_per_session": 2, 00:04:14.301 "max_queue_depth": 64, 00:04:14.301 "default_time2wait": 2, 00:04:14.301 "default_time2retain": 20, 00:04:14.301 "first_burst_length": 8192, 00:04:14.301 "immediate_data": true, 00:04:14.301 "allow_duplicated_isid": false, 00:04:14.301 "error_recovery_level": 0, 00:04:14.301 "nop_timeout": 60, 00:04:14.301 "nop_in_interval": 30, 00:04:14.301 "disable_chap": false, 00:04:14.301 "require_chap": false, 00:04:14.301 "mutual_chap": false, 00:04:14.301 "chap_group": 0, 00:04:14.301 "max_large_datain_per_connection": 64, 00:04:14.301 "max_r2t_per_connection": 4, 00:04:14.301 "pdu_pool_size": 36864, 00:04:14.301 "immediate_data_pool_size": 16384, 00:04:14.301 "data_out_pool_size": 2048 00:04:14.301 } 00:04:14.301 } 00:04:14.301 ] 00:04:14.301 } 00:04:14.301 ] 00:04:14.301 } 00:04:14.301 18:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:14.301 18:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 222945 00:04:14.301 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 222945 ']' 00:04:14.301 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 222945 00:04:14.301 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:14.301 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:14.301 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 222945 00:04:14.301 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:14.301 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:14.301 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 222945' 00:04:14.301 killing process with pid 222945 00:04:14.301 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 222945 00:04:14.302 18:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 222945 00:04:14.561 18:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:14.561 18:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=223188 00:04:14.561 18:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:19.841 18:12:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 223188 00:04:19.841 18:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 223188 ']' 00:04:19.841 18:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 223188 00:04:19.841 18:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:19.841 18:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:19.841 18:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 223188 00:04:19.841 18:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:19.841 18:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:19.841 18:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 223188' 00:04:19.841 killing process with pid 223188 00:04:19.841 18:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 223188 00:04:19.841 18:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 223188 00:04:20.100 18:12:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:20.101 00:04:20.101 real 0m6.842s 00:04:20.101 user 0m6.671s 00:04:20.101 sys 0m0.645s 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.101 ************************************ 00:04:20.101 END TEST skip_rpc_with_json 00:04:20.101 ************************************ 00:04:20.101 18:12:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:20.101 18:12:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.101 18:12:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.101 18:12:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.101 ************************************ 00:04:20.101 START TEST skip_rpc_with_delay 00:04:20.101 ************************************ 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.101 [2024-10-08 18:12:13.338809] app.c: 
840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:20.101 [2024-10-08 18:12:13.338876] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:20.101 00:04:20.101 real 0m0.070s 00:04:20.101 user 0m0.039s 00:04:20.101 sys 0m0.031s 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.101 18:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:20.101 ************************************ 00:04:20.101 END TEST skip_rpc_with_delay 00:04:20.101 ************************************ 00:04:20.101 18:12:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:20.101 18:12:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:20.101 18:12:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:20.101 18:12:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.101 18:12:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.101 18:12:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.360 ************************************ 00:04:20.360 START TEST exit_on_failed_rpc_init 00:04:20.360 ************************************ 00:04:20.360 18:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:20.360 18:12:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=224544 00:04:20.360 18:12:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 224544 00:04:20.360 18:12:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.360 18:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 224544 ']' 00:04:20.360 18:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.360 18:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:20.360 18:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.360 18:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:20.360 18:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.360 [2024-10-08 18:12:13.478997] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
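The exit_on_failed_rpc_init test starting here verifies that a second spdk_tgt instance exits non-zero when the RPC Unix socket is already claimed: the first instance binds /var/tmp/spdk.sock, so the second (on core mask 0x2) must fail to listen and stop the app, which is what the rpc.c errors further below record. A reduced sketch, again with shortened paths and helpers elided:

    # A second target must refuse to start while the RPC socket is in use.
    spdk_tgt -m 0x1 & pid=$!                 # first instance owns /var/tmp/spdk.sock
    # ... wait for the socket ...
    if spdk_tgt -m 0x2; then                 # expected to fail: socket conflict
        echo "FAIL: second target started despite RPC socket conflict" >&2
        kill -SIGINT "$pid"; exit 1
    fi
    kill -SIGINT "$pid"; wait "$pid"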
00:04:20.360 [2024-10-08 18:12:13.479043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224544 ] 00:04:20.360 [2024-10-08 18:12:13.546059] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.360 [2024-10-08 18:12:13.624289] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.298 [2024-10-08 18:12:14.360718] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:04:21.298 [2024-10-08 18:12:14.360764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224779 ] 00:04:21.298 [2024-10-08 18:12:14.426759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.298 [2024-10-08 18:12:14.498358] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.298 [2024-10-08 18:12:14.498432] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:21.298 [2024-10-08 18:12:14.498442] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:21.298 [2024-10-08 18:12:14.498448] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 224544 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 224544 ']' 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 224544 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:21.298 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 224544 00:04:21.557 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:21.557 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:21.557 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 224544' 00:04:21.557 killing process with pid 224544 00:04:21.557 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 224544 00:04:21.557 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 224544 00:04:21.817 00:04:21.817 real 0m1.524s 00:04:21.817 user 0m1.749s 00:04:21.817 sys 0m0.446s 00:04:21.817 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.817 18:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.817 ************************************ 00:04:21.817 END TEST exit_on_failed_rpc_init 00:04:21.817 ************************************ 00:04:21.817 18:12:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:21.817 00:04:21.817 real 0m14.290s 00:04:21.817 user 0m13.826s 00:04:21.817 sys 0m1.679s 00:04:21.817 18:12:14 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.817 18:12:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.817 ************************************ 00:04:21.817 END TEST skip_rpc 00:04:21.817 ************************************ 00:04:21.817 18:12:15 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:21.817 18:12:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.817 18:12:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.817 18:12:15 -- 
common/autotest_common.sh@10 -- # set +x 00:04:21.817 ************************************ 00:04:21.817 START TEST rpc_client 00:04:21.817 ************************************ 00:04:21.817 18:12:15 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:22.078 * Looking for test storage... 00:04:22.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:22.078 18:12:15 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:22.078 18:12:15 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:22.078 18:12:15 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:22.078 18:12:15 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.078 18:12:15 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:22.078 18:12:15 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.078 18:12:15 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:22.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.078 --rc genhtml_branch_coverage=1 00:04:22.078 --rc genhtml_function_coverage=1 00:04:22.078 --rc genhtml_legend=1 00:04:22.078 --rc geninfo_all_blocks=1 00:04:22.078 --rc geninfo_unexecuted_blocks=1 00:04:22.078 00:04:22.078 ' 00:04:22.078 18:12:15 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:22.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.078 --rc genhtml_branch_coverage=1 00:04:22.078 --rc genhtml_function_coverage=1 00:04:22.078 --rc genhtml_legend=1 00:04:22.078 --rc geninfo_all_blocks=1 00:04:22.078 --rc geninfo_unexecuted_blocks=1 00:04:22.078 00:04:22.078 ' 00:04:22.078 18:12:15 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:22.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.078 --rc genhtml_branch_coverage=1 00:04:22.078 --rc genhtml_function_coverage=1 00:04:22.078 --rc genhtml_legend=1 00:04:22.078 --rc geninfo_all_blocks=1 00:04:22.078 --rc geninfo_unexecuted_blocks=1 00:04:22.078 00:04:22.078 ' 00:04:22.078 18:12:15 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:22.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.078 --rc genhtml_branch_coverage=1 00:04:22.078 --rc genhtml_function_coverage=1 00:04:22.078 --rc genhtml_legend=1 00:04:22.078 --rc geninfo_all_blocks=1 00:04:22.078 --rc geninfo_unexecuted_blocks=1 00:04:22.078 00:04:22.078 ' 00:04:22.078 18:12:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:22.078 OK 00:04:22.078 18:12:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:22.078 00:04:22.078 real 0m0.203s 00:04:22.078 user 0m0.116s 00:04:22.078 sys 0m0.101s 00:04:22.078 18:12:15 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.078 18:12:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:22.078 ************************************ 00:04:22.078 END TEST rpc_client 00:04:22.078 ************************************ 00:04:22.078 18:12:15 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:22.078 18:12:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.078 18:12:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.078 18:12:15 -- common/autotest_common.sh@10 -- # set +x 00:04:22.078 ************************************ 00:04:22.078 START TEST json_config 00:04:22.078 ************************************ 00:04:22.078 18:12:15 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:22.078 18:12:15 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:22.078 18:12:15 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:22.078 18:12:15 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:22.338 18:12:15 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:22.338 18:12:15 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.338 18:12:15 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.338 18:12:15 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.338 18:12:15 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.338 18:12:15 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.338 18:12:15 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.338 18:12:15 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.338 18:12:15 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.338 18:12:15 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.338 18:12:15 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.338 18:12:15 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.338 18:12:15 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:22.338 18:12:15 json_config -- scripts/common.sh@345 -- # : 1 00:04:22.338 18:12:15 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.338 18:12:15 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.338 18:12:15 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:22.338 18:12:15 json_config -- scripts/common.sh@353 -- # local d=1 00:04:22.338 18:12:15 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.338 18:12:15 json_config -- scripts/common.sh@355 -- # echo 1 00:04:22.338 18:12:15 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.338 18:12:15 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:22.338 18:12:15 json_config -- scripts/common.sh@353 -- # local d=2 00:04:22.338 18:12:15 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.338 18:12:15 json_config -- scripts/common.sh@355 -- # echo 2 00:04:22.338 18:12:15 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.338 18:12:15 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.338 18:12:15 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.338 18:12:15 json_config -- scripts/common.sh@368 -- # return 0 00:04:22.338 18:12:15 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.338 18:12:15 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:22.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.338 --rc genhtml_branch_coverage=1 00:04:22.338 --rc genhtml_function_coverage=1 00:04:22.338 --rc genhtml_legend=1 00:04:22.338 --rc geninfo_all_blocks=1 00:04:22.338 --rc geninfo_unexecuted_blocks=1 00:04:22.338 00:04:22.338 ' 00:04:22.338 18:12:15 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:22.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.338 --rc genhtml_branch_coverage=1 00:04:22.339 --rc genhtml_function_coverage=1 00:04:22.339 --rc genhtml_legend=1 00:04:22.339 --rc geninfo_all_blocks=1 00:04:22.339 --rc geninfo_unexecuted_blocks=1 00:04:22.339 00:04:22.339 ' 00:04:22.339 18:12:15 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:22.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.339 --rc genhtml_branch_coverage=1 00:04:22.339 --rc genhtml_function_coverage=1 00:04:22.339 --rc genhtml_legend=1 00:04:22.339 --rc geninfo_all_blocks=1 00:04:22.339 --rc geninfo_unexecuted_blocks=1 00:04:22.339 00:04:22.339 ' 00:04:22.339 18:12:15 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:22.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.339 --rc genhtml_branch_coverage=1 00:04:22.339 --rc genhtml_function_coverage=1 00:04:22.339 --rc genhtml_legend=1 00:04:22.339 --rc geninfo_all_blocks=1 00:04:22.339 --rc geninfo_unexecuted_blocks=1 00:04:22.339 00:04:22.339 ' 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:22.339 18:12:15 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:22.339 18:12:15 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:22.339 18:12:15 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.339 18:12:15 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.339 18:12:15 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.339 18:12:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.339 18:12:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.339 18:12:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.339 18:12:15 json_config -- paths/export.sh@5 -- # export PATH 00:04:22.339 18:12:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@51 -- # : 0 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:22.339 18:12:15 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:22.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:22.339 18:12:15 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:22.339 INFO: JSON configuration test init 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:22.339 18:12:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.339 18:12:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:22.339 18:12:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.339 18:12:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.339 18:12:15 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:22.339 18:12:15 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:22.339 18:12:15 json_config -- json_config/common.sh@10 -- # shift 00:04:22.339 18:12:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:22.339 18:12:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:22.339 18:12:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:22.339 18:12:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.339 18:12:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.339 18:12:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=225131 00:04:22.339 18:12:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:22.339 Waiting for target to run... 00:04:22.339 18:12:15 json_config -- json_config/common.sh@25 -- # waitforlisten 225131 /var/tmp/spdk_tgt.sock 00:04:22.339 18:12:15 json_config -- common/autotest_common.sh@831 -- # '[' -z 225131 ']' 00:04:22.339 18:12:15 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.339 18:12:15 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:22.339 18:12:15 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:22.339 18:12:15 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.339 18:12:15 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:22.339 18:12:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.339 [2024-10-08 18:12:15.574563] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
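The waitforlisten step shown here polls the target's RPC socket until it answers; json_config also passes --wait-for-rpc so subsystem initialization stays paused until the test issues load_config. One way to express that polling loop (an approximation of the helper, not its exact code; rpc_get_methods is used here as a cheap query to probe liveness):

    # Poll an SPDK RPC socket until it accepts requests (or give up).
    wait_for_rpc_sock() {
        local sock=$1 retries=${2:-100}
        while (( retries-- > 0 )); do
            rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                              # target never started listening
    }
    wait_for_rpc_sock /var/tmp/spdk_tgt.sock || exit 1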
00:04:22.339 [2024-10-08 18:12:15.574612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid225131 ] 00:04:22.598 [2024-10-08 18:12:15.856568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.856 [2024-10-08 18:12:15.924211] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.115 18:12:16 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:23.115 18:12:16 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:23.115 18:12:16 json_config -- json_config/common.sh@26 -- # echo '' 00:04:23.115 00:04:23.115 18:12:16 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:23.115 18:12:16 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:23.115 18:12:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:23.115 18:12:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.115 18:12:16 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:23.115 18:12:16 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:23.115 18:12:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:23.115 18:12:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.115 18:12:16 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:23.115 18:12:16 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:23.115 18:12:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:26.434 18:12:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.434 18:12:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:26.434 18:12:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:26.434 18:12:19 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@54 -- # sort 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:26.434 18:12:19 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:26.434 18:12:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:26.434 18:12:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.693 18:12:19 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:26.693 18:12:19 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:26.693 18:12:19 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:26.693 18:12:19 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:26.693 18:12:19 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:26.693 18:12:19 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:26.693 18:12:19 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:26.693 18:12:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.693 18:12:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.693 18:12:19 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:26.693 18:12:19 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:26.694 18:12:19 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:26.694 18:12:19 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.694 18:12:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.694 MallocForNvmf0 00:04:26.694 18:12:19 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.694 18:12:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.952 MallocForNvmf1 00:04:26.952 18:12:20 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:26.952 18:12:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:27.210 [2024-10-08 18:12:20.349086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:27.210 18:12:20 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.210 18:12:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.469 18:12:20 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.469 18:12:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.469 18:12:20 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.469 18:12:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.727 18:12:20 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.727 18:12:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.986 [2024-10-08 18:12:21.127511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:27.986 18:12:21 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:27.986 18:12:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.986 18:12:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.986 18:12:21 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:27.986 18:12:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.986 18:12:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.986 18:12:21 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:27.986 18:12:21 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:27.986 18:12:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.244 MallocBdevForConfigChangeCheck 00:04:28.244 18:12:21 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:28.244 18:12:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.244 18:12:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.244 18:12:21 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:28.244 18:12:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.503 18:12:21 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:28.503 INFO: shutting down applications... 
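The target configuration assembled above is built entirely through rpc.py calls against the target's UNIX-domain socket. A minimal sketch that replays the same sequence by hand, assuming a spdk_tgt already listening on /var/tmp/spdk_tgt.sock as in this run:

    #!/usr/bin/env bash
    # Replay of the json_config NVMe-oF setup RPCs traced above.
    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512-byte blocks
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024-byte blocks
    rpc nvmf_create_transport -t tcp -u 8192 -c 0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

Each call mirrors a tgt_rpc line above; -a allows any host NQN to connect and -s sets the subsystem serial number.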
00:04:28.503 18:12:21 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:28.503 18:12:21 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:28.503 18:12:21 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:28.503 18:12:21 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:31.037 Calling clear_iscsi_subsystem 00:04:31.037 Calling clear_nvmf_subsystem 00:04:31.037 Calling clear_nbd_subsystem 00:04:31.037 Calling clear_ublk_subsystem 00:04:31.037 Calling clear_vhost_blk_subsystem 00:04:31.037 Calling clear_vhost_scsi_subsystem 00:04:31.037 Calling clear_bdev_subsystem 00:04:31.037 18:12:23 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:31.037 18:12:23 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:31.037 18:12:23 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:31.037 18:12:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.037 18:12:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:31.037 18:12:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:31.037 18:12:24 json_config -- json_config/json_config.sh@352 -- # break 00:04:31.037 18:12:24 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:31.037 18:12:24 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:31.037 18:12:24 json_config -- json_config/common.sh@31 -- # local app=target 00:04:31.037 18:12:24 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:31.037 18:12:24 json_config -- json_config/common.sh@35 -- # [[ -n 225131 ]] 00:04:31.037 18:12:24 json_config -- json_config/common.sh@38 -- # kill -SIGINT 225131 00:04:31.037 18:12:24 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:31.037 18:12:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.037 18:12:24 json_config -- json_config/common.sh@41 -- # kill -0 225131 00:04:31.037 18:12:24 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:31.605 18:12:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:31.606 18:12:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.606 18:12:24 json_config -- json_config/common.sh@41 -- # kill -0 225131 00:04:31.606 18:12:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:31.606 18:12:24 json_config -- json_config/common.sh@43 -- # break 00:04:31.606 18:12:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:31.606 18:12:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:31.606 SPDK target shutdown done 00:04:31.606 18:12:24 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:31.606 INFO: relaunching applications... 
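The teardown above follows json_config/common.sh's shutdown pattern visible in the trace: send SIGINT, then poll the PID up to 30 times at half-second intervals (the kill -0 225131 / sleep 0.5 lines). A condensed sketch of that loop, with pid standing in for the app_pid entry the script uses:

    # Graceful shutdown: SIGINT first, then poll until the process is gone.
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only checks existence
        sleep 0.5
    done
    if kill -0 "$pid" 2>/dev/null; then
        echo "process $pid still alive after SIGINT" >&2
    else
        echo 'SPDK target shutdown done'
    fi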
00:04:31.606 18:12:24 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.606 18:12:24 json_config -- json_config/common.sh@9 -- # local app=target 00:04:31.606 18:12:24 json_config -- json_config/common.sh@10 -- # shift 00:04:31.606 18:12:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:31.606 18:12:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:31.606 18:12:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:31.606 18:12:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.606 18:12:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.606 18:12:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=226837 00:04:31.606 18:12:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:31.606 Waiting for target to run... 00:04:31.606 18:12:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.606 18:12:24 json_config -- json_config/common.sh@25 -- # waitforlisten 226837 /var/tmp/spdk_tgt.sock 00:04:31.606 18:12:24 json_config -- common/autotest_common.sh@831 -- # '[' -z 226837 ']' 00:04:31.606 18:12:24 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:31.606 18:12:24 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:31.606 18:12:24 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:31.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:31.606 18:12:24 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:31.606 18:12:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.606 [2024-10-08 18:12:24.872070] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:04:31.606 [2024-10-08 18:12:24.872135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid226837 ] 00:04:31.865 [2024-10-08 18:12:25.155709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.124 [2024-10-08 18:12:25.223586] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.412 [2024-10-08 18:12:28.253169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.412 [2024-10-08 18:12:28.285520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:35.412 18:12:28 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.412 18:12:28 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:35.412 18:12:28 json_config -- json_config/common.sh@26 -- # echo '' 00:04:35.412 00:04:35.412 18:12:28 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:35.412 18:12:28 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:35.412 INFO: Checking if target configuration is the same... 
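The relaunch above restarts the target from the configuration captured earlier: save_config serializes the live state as JSON on stdout, and --json feeds it back at startup. With the binaries and paths from this run:

    # Save the live configuration, then restart the target from it.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &

(-m 0x1 pins one core, -s 1024 caps memory at 1024 MB, and -r names the RPC socket, matching the command line traced above.)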
00:04:35.413 18:12:28 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.413 18:12:28 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:35.413 18:12:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.413 + '[' 2 -ne 2 ']' 00:04:35.413 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:35.413 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:35.413 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:35.413 +++ basename /dev/fd/62 00:04:35.413 ++ mktemp /tmp/62.XXX 00:04:35.413 + tmp_file_1=/tmp/62.ZyX 00:04:35.413 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.413 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.413 + tmp_file_2=/tmp/spdk_tgt_config.json.50z 00:04:35.413 + ret=0 00:04:35.413 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.413 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.413 + diff -u /tmp/62.ZyX /tmp/spdk_tgt_config.json.50z 00:04:35.413 + echo 'INFO: JSON config files are the same' 00:04:35.413 INFO: JSON config files are the same 00:04:35.413 + rm /tmp/62.ZyX /tmp/spdk_tgt_config.json.50z 00:04:35.413 + exit 0 00:04:35.413 18:12:28 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:35.413 18:12:28 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:35.413 INFO: changing configuration and checking if this can be detected... 00:04:35.413 18:12:28 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.413 18:12:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.671 18:12:28 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.671 18:12:28 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:35.672 18:12:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.672 + '[' 2 -ne 2 ']' 00:04:35.672 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:35.672 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
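json_diff.sh, traced above and re-entered below with a changed configuration, avoids false mismatches by piping both sides through config_filter.py -method sort before diffing, so key order does not matter. The same comparison by hand, reusing the helpers from this tree:

    # Order-insensitive config comparison (sketch of what json_diff.sh does).
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    live=$(mktemp /tmp/62.XXX)
    saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    "$rootdir"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | "$rootdir"/test/json_config/config_filter.py -method sort > "$live"
    "$rootdir"/test/json_config/config_filter.py -method sort \
        < "$rootdir"/spdk_tgt_config.json > "$saved"
    diff -u "$live" "$saved" && echo 'INFO: JSON config files are the same'
    rm -f "$live" "$saved"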
00:04:35.672 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:35.672 +++ basename /dev/fd/62 00:04:35.672 ++ mktemp /tmp/62.XXX 00:04:35.672 + tmp_file_1=/tmp/62.Ei1 00:04:35.672 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.672 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.672 + tmp_file_2=/tmp/spdk_tgt_config.json.vQQ 00:04:35.672 + ret=0 00:04:35.672 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.930 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:36.189 + diff -u /tmp/62.Ei1 /tmp/spdk_tgt_config.json.vQQ 00:04:36.189 + ret=1 00:04:36.189 + echo '=== Start of file: /tmp/62.Ei1 ===' 00:04:36.189 + cat /tmp/62.Ei1 00:04:36.189 + echo '=== End of file: /tmp/62.Ei1 ===' 00:04:36.189 + echo '' 00:04:36.189 + echo '=== Start of file: /tmp/spdk_tgt_config.json.vQQ ===' 00:04:36.189 + cat /tmp/spdk_tgt_config.json.vQQ 00:04:36.189 + echo '=== End of file: /tmp/spdk_tgt_config.json.vQQ ===' 00:04:36.189 + echo '' 00:04:36.189 + rm /tmp/62.Ei1 /tmp/spdk_tgt_config.json.vQQ 00:04:36.189 + exit 1 00:04:36.189 18:12:29 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:36.189 INFO: configuration change detected. 00:04:36.189 18:12:29 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:36.189 18:12:29 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:36.189 18:12:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.189 18:12:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.189 18:12:29 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:36.189 18:12:29 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:36.189 18:12:29 json_config -- json_config/json_config.sh@324 -- # [[ -n 226837 ]] 00:04:36.189 18:12:29 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:36.189 18:12:29 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:36.189 18:12:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.189 18:12:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.189 18:12:29 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:36.189 18:12:29 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:36.189 18:12:29 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:36.189 18:12:29 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:36.189 18:12:29 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:36.189 18:12:29 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:36.189 18:12:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:36.189 18:12:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.189 18:12:29 json_config -- json_config/json_config.sh@330 -- # killprocess 226837 00:04:36.189 18:12:29 json_config -- common/autotest_common.sh@950 -- # '[' -z 226837 ']' 00:04:36.189 18:12:29 json_config -- common/autotest_common.sh@954 -- # kill -0 226837 00:04:36.189 18:12:29 json_config -- common/autotest_common.sh@955 -- # uname 00:04:36.189 18:12:29 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.189 18:12:29 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 226837 00:04:36.189 18:12:29 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.189 18:12:29 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.189 18:12:29 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 226837' 00:04:36.189 killing process with pid 226837 00:04:36.189 18:12:29 json_config -- common/autotest_common.sh@969 -- # kill 226837 00:04:36.189 18:12:29 json_config -- common/autotest_common.sh@974 -- # wait 226837 00:04:38.724 18:12:31 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.724 18:12:31 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:38.724 18:12:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:38.724 18:12:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.724 18:12:31 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:38.724 18:12:31 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:38.724 INFO: Success 00:04:38.724 00:04:38.724 real 0m16.131s 00:04:38.724 user 0m16.756s 00:04:38.724 sys 0m2.340s 00:04:38.724 18:12:31 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.724 18:12:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.724 ************************************ 00:04:38.724 END TEST json_config 00:04:38.724 ************************************ 00:04:38.724 18:12:31 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:38.724 18:12:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.724 18:12:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.724 18:12:31 -- common/autotest_common.sh@10 -- # set +x 00:04:38.724 ************************************ 00:04:38.724 START TEST json_config_extra_key 00:04:38.724 ************************************ 00:04:38.724 18:12:31 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:38.724 18:12:31 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:38.724 18:12:31 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:38.724 18:12:31 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:38.724 18:12:31 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.724 18:12:31 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:38.724 18:12:31 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.724 18:12:31 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:38.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.724 --rc genhtml_branch_coverage=1 00:04:38.724 --rc genhtml_function_coverage=1 00:04:38.724 --rc genhtml_legend=1 00:04:38.724 --rc geninfo_all_blocks=1 00:04:38.724 --rc geninfo_unexecuted_blocks=1 00:04:38.724 00:04:38.724 ' 00:04:38.724 18:12:31 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:38.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.724 --rc genhtml_branch_coverage=1 00:04:38.724 --rc genhtml_function_coverage=1 00:04:38.724 --rc genhtml_legend=1 00:04:38.724 --rc geninfo_all_blocks=1 00:04:38.724 --rc geninfo_unexecuted_blocks=1 00:04:38.724 00:04:38.724 ' 00:04:38.724 18:12:31 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:38.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.724 --rc genhtml_branch_coverage=1 00:04:38.724 --rc genhtml_function_coverage=1 00:04:38.724 --rc genhtml_legend=1 00:04:38.724 --rc geninfo_all_blocks=1 00:04:38.724 --rc geninfo_unexecuted_blocks=1 00:04:38.724 00:04:38.724 ' 00:04:38.724 18:12:31 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:38.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.724 --rc genhtml_branch_coverage=1 00:04:38.724 --rc genhtml_function_coverage=1 00:04:38.724 --rc genhtml_legend=1 00:04:38.724 --rc geninfo_all_blocks=1 00:04:38.724 --rc geninfo_unexecuted_blocks=1 00:04:38.724 00:04:38.724 ' 00:04:38.724 18:12:31 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.724 18:12:31 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.724 18:12:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.724 18:12:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.724 18:12:31 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.724 18:12:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:38.724 18:12:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:38.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:38.724 18:12:31 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:38.724 18:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:38.725 18:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:38.725 18:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:38.725 18:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:38.725 18:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:38.725 18:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:38.725 18:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:38.725 18:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:38.725 18:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:38.725 18:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:38.725 18:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:38.725 INFO: launching applications... 
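The "[: : integer expression expected" message above comes from nvmf/common.sh line 33, where an empty string reaches a numeric test ('[' '' -eq 1 ']'); the run tolerates the non-zero status and continues. A defensive sketch of that kind of check (VAR is a hypothetical stand-in; the variable's name is not visible in this log):

    # '' -eq 1 is an error inside [ ]; defaulting the operand keeps the test numeric.
    if [ "${VAR:-0}" -eq 1 ]; then   # hypothetical variable, shown only to illustrate the guard
        echo 'feature enabled'
    fi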
00:04:38.725 18:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:38.725 18:12:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:38.725 18:12:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:38.725 18:12:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:38.725 18:12:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:38.725 18:12:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:38.725 18:12:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.725 18:12:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.725 18:12:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=228129 00:04:38.725 18:12:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:38.725 Waiting for target to run... 00:04:38.725 18:12:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 228129 /var/tmp/spdk_tgt.sock 00:04:38.725 18:12:31 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 228129 ']' 00:04:38.725 18:12:31 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:38.725 18:12:31 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:38.725 18:12:31 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.725 18:12:31 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:38.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:38.725 18:12:31 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.725 18:12:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:38.725 [2024-10-08 18:12:31.772266] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:04:38.725 [2024-10-08 18:12:31.772316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228129 ] 00:04:38.984 [2024-10-08 18:12:32.226751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.242 [2024-10-08 18:12:32.312643] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.501 18:12:32 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:39.501 18:12:32 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:39.501 18:12:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:39.501 00:04:39.501 18:12:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:39.501 INFO: shutting down applications... 
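The "Waiting for target to run..." phase above is autotest_common.sh's waitforlisten: block until the freshly launched target answers RPCs on its socket, bailing out if the process dies first. A condensed sketch of that idea using the same -s/-t rpc.py flags seen elsewhere in this log (the real helper also handles TCP addresses and a configurable retry budget):

    # Poll the RPC socket until the target answers or the process exits.
    sock=/var/tmp/spdk_tgt.sock
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
        if [ -S "$sock" ] && scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done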
00:04:39.501 18:12:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:39.501 18:12:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:39.501 18:12:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:39.501 18:12:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 228129 ]] 00:04:39.501 18:12:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 228129 00:04:39.501 18:12:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:39.501 18:12:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.501 18:12:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 228129 00:04:39.501 18:12:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.069 18:12:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.069 18:12:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.069 18:12:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 228129 00:04:40.069 18:12:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:40.069 18:12:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:40.069 18:12:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:40.069 18:12:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:40.069 SPDK target shutdown done 00:04:40.069 18:12:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:40.069 Success 00:04:40.069 00:04:40.069 real 0m1.569s 00:04:40.069 user 0m1.195s 00:04:40.069 sys 0m0.572s 00:04:40.069 18:12:33 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.069 18:12:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:40.069 ************************************ 00:04:40.069 END TEST json_config_extra_key 00:04:40.069 ************************************ 00:04:40.069 18:12:33 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.069 18:12:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.069 18:12:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.069 18:12:33 -- common/autotest_common.sh@10 -- # set +x 00:04:40.069 ************************************ 00:04:40.069 START TEST alias_rpc 00:04:40.069 ************************************ 00:04:40.069 18:12:33 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.069 * Looking for test storage... 
00:04:40.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:40.070 18:12:33 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:40.070 18:12:33 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:40.070 18:12:33 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:40.070 18:12:33 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.070 18:12:33 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.070 18:12:33 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.070 18:12:33 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:40.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.070 --rc genhtml_branch_coverage=1 00:04:40.070 --rc genhtml_function_coverage=1 00:04:40.070 --rc genhtml_legend=1 00:04:40.070 --rc geninfo_all_blocks=1 00:04:40.070 --rc geninfo_unexecuted_blocks=1 00:04:40.070 00:04:40.070 ' 00:04:40.070 18:12:33 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:40.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.070 --rc genhtml_branch_coverage=1 00:04:40.070 --rc genhtml_function_coverage=1 00:04:40.070 --rc genhtml_legend=1 00:04:40.070 --rc geninfo_all_blocks=1 00:04:40.070 --rc geninfo_unexecuted_blocks=1 00:04:40.070 00:04:40.070 ' 00:04:40.070 18:12:33 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:40.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.070 --rc genhtml_branch_coverage=1 00:04:40.070 --rc genhtml_function_coverage=1 00:04:40.070 --rc genhtml_legend=1 00:04:40.070 --rc geninfo_all_blocks=1 00:04:40.070 --rc geninfo_unexecuted_blocks=1 00:04:40.070 00:04:40.070 ' 00:04:40.070 18:12:33 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:40.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.070 --rc genhtml_branch_coverage=1 00:04:40.070 --rc genhtml_function_coverage=1 00:04:40.070 --rc genhtml_legend=1 00:04:40.070 --rc geninfo_all_blocks=1 00:04:40.070 --rc geninfo_unexecuted_blocks=1 00:04:40.070 00:04:40.070 ' 00:04:40.070 18:12:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:40.070 18:12:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=228453 00:04:40.070 18:12:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 228453 00:04:40.070 18:12:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.070 18:12:33 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 228453 ']' 00:04:40.070 18:12:33 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.070 18:12:33 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:40.070 18:12:33 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.070 18:12:33 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:40.070 18:12:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.329 [2024-10-08 18:12:33.409338] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
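The lt 1.15 2 trace above is scripts/common.sh comparing the installed lcov version component-wise: split the strings on dots and dashes, then compare numerically field by field. A condensed sketch of the same idea (the tree's cmp_versions additionally validates each field against ^[0-9]+$ and splits on colons too):

    # Component-wise "less than" for dotted version strings.
    lt() {
        local -a a b
        local i
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields compare as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo 'lcov is older than 2.x'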
00:04:40.329 [2024-10-08 18:12:33.409393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228453 ] 00:04:40.329 [2024-10-08 18:12:33.458045] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.329 [2024-10-08 18:12:33.534321] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.265 18:12:34 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.265 18:12:34 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:41.265 18:12:34 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:41.265 18:12:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 228453 00:04:41.265 18:12:34 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 228453 ']' 00:04:41.265 18:12:34 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 228453 00:04:41.265 18:12:34 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:41.265 18:12:34 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:41.265 18:12:34 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 228453 00:04:41.265 18:12:34 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:41.265 18:12:34 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:41.265 18:12:34 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 228453' 00:04:41.265 killing process with pid 228453 00:04:41.265 18:12:34 alias_rpc -- common/autotest_common.sh@969 -- # kill 228453 00:04:41.265 18:12:34 alias_rpc -- common/autotest_common.sh@974 -- # wait 228453 00:04:41.524 00:04:41.524 real 0m1.639s 00:04:41.524 user 0m1.800s 00:04:41.524 sys 0m0.430s 00:04:41.524 18:12:34 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.524 18:12:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.524 ************************************ 00:04:41.524 END TEST alias_rpc 00:04:41.524 ************************************ 00:04:41.783 18:12:34 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:41.783 18:12:34 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:41.783 18:12:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.783 18:12:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.783 18:12:34 -- common/autotest_common.sh@10 -- # set +x 00:04:41.783 ************************************ 00:04:41.783 START TEST spdkcli_tcp 00:04:41.783 ************************************ 00:04:41.783 18:12:34 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:41.783 * Looking for test storage... 
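The alias_rpc teardown above uses autotest_common.sh's killprocess, whose checks are all visible in the trace: confirm the PID is non-empty and alive, identify the process name with ps, special-case sudo, then kill and wait. As a self-contained sketch:

    # killprocess pattern: verify before signalling.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1      # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1              # sketch only: the real helper resolves sudo's child PID instead
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                     # wait reaps only this shell's own children
    }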
00:04:41.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:41.783 18:12:34 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:41.783 18:12:34 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:41.783 18:12:34 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:41.783 18:12:35 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.783 18:12:35 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:41.783 18:12:35 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.783 18:12:35 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:41.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.783 --rc genhtml_branch_coverage=1 00:04:41.783 --rc genhtml_function_coverage=1 00:04:41.783 --rc genhtml_legend=1 00:04:41.783 --rc geninfo_all_blocks=1 00:04:41.783 --rc geninfo_unexecuted_blocks=1 00:04:41.783 00:04:41.783 ' 00:04:41.783 18:12:35 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:41.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.783 --rc genhtml_branch_coverage=1 00:04:41.783 --rc genhtml_function_coverage=1 00:04:41.783 --rc genhtml_legend=1 00:04:41.783 --rc geninfo_all_blocks=1 00:04:41.783 --rc 
geninfo_unexecuted_blocks=1 00:04:41.783 00:04:41.783 ' 00:04:41.783 18:12:35 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:41.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.783 --rc genhtml_branch_coverage=1 00:04:41.783 --rc genhtml_function_coverage=1 00:04:41.783 --rc genhtml_legend=1 00:04:41.783 --rc geninfo_all_blocks=1 00:04:41.783 --rc geninfo_unexecuted_blocks=1 00:04:41.783 00:04:41.783 ' 00:04:41.783 18:12:35 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:41.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.783 --rc genhtml_branch_coverage=1 00:04:41.783 --rc genhtml_function_coverage=1 00:04:41.783 --rc genhtml_legend=1 00:04:41.783 --rc geninfo_all_blocks=1 00:04:41.784 --rc geninfo_unexecuted_blocks=1 00:04:41.784 00:04:41.784 ' 00:04:41.784 18:12:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:41.784 18:12:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:41.784 18:12:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:41.784 18:12:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:41.784 18:12:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:41.784 18:12:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:41.784 18:12:35 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:41.784 18:12:35 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:41.784 18:12:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.784 18:12:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=228745 00:04:41.784 18:12:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 228745 00:04:41.784 18:12:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:41.784 18:12:35 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 228745 ']' 00:04:41.784 18:12:35 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.784 18:12:35 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:41.784 18:12:35 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.784 18:12:35 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:41.784 18:12:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.043 [2024-10-08 18:12:35.122816] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
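The rpc_get_methods listing below is fetched over TCP rather than over the UNIX socket directly: the test starts socat to bridge 127.0.0.1:9998 to /var/tmp/spdk.sock, then points rpc.py at the TCP side. A minimal replay with the same endpoints and flags as this run:

    # Bridge the RPC UNIX socket to TCP, then query it through the bridge.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid" 2>/dev/null

(-r 100 retries the connection while socat comes up; -t 2 caps the RPC timeout at two seconds.)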
00:04:42.043 [2024-10-08 18:12:35.122863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228745 ] 00:04:42.043 [2024-10-08 18:12:35.188303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.043 [2024-10-08 18:12:35.260688] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.043 [2024-10-08 18:12:35.260690] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.979 18:12:35 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:42.979 18:12:35 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:42.979 18:12:35 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=228976 00:04:42.979 18:12:35 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:42.979 18:12:35 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:42.979 [ 00:04:42.979 "bdev_malloc_delete", 00:04:42.979 "bdev_malloc_create", 00:04:42.979 "bdev_null_resize", 00:04:42.979 "bdev_null_delete", 00:04:42.979 "bdev_null_create", 00:04:42.979 "bdev_nvme_cuse_unregister", 00:04:42.979 "bdev_nvme_cuse_register", 00:04:42.979 "bdev_opal_new_user", 00:04:42.979 "bdev_opal_set_lock_state", 00:04:42.979 "bdev_opal_delete", 00:04:42.979 "bdev_opal_get_info", 00:04:42.979 "bdev_opal_create", 00:04:42.979 "bdev_nvme_opal_revert", 00:04:42.979 "bdev_nvme_opal_init", 00:04:42.979 "bdev_nvme_send_cmd", 00:04:42.979 "bdev_nvme_set_keys", 00:04:42.979 "bdev_nvme_get_path_iostat", 00:04:42.979 "bdev_nvme_get_mdns_discovery_info", 00:04:42.979 "bdev_nvme_stop_mdns_discovery", 00:04:42.979 "bdev_nvme_start_mdns_discovery", 00:04:42.979 "bdev_nvme_set_multipath_policy", 00:04:42.979 "bdev_nvme_set_preferred_path", 00:04:42.979 "bdev_nvme_get_io_paths", 00:04:42.979 "bdev_nvme_remove_error_injection", 00:04:42.979 "bdev_nvme_add_error_injection", 00:04:42.979 "bdev_nvme_get_discovery_info", 00:04:42.979 "bdev_nvme_stop_discovery", 00:04:42.979 "bdev_nvme_start_discovery", 00:04:42.979 "bdev_nvme_get_controller_health_info", 00:04:42.979 "bdev_nvme_disable_controller", 00:04:42.979 "bdev_nvme_enable_controller", 00:04:42.979 "bdev_nvme_reset_controller", 00:04:42.979 "bdev_nvme_get_transport_statistics", 00:04:42.979 "bdev_nvme_apply_firmware", 00:04:42.979 "bdev_nvme_detach_controller", 00:04:42.979 "bdev_nvme_get_controllers", 00:04:42.979 "bdev_nvme_attach_controller", 00:04:42.979 "bdev_nvme_set_hotplug", 00:04:42.979 "bdev_nvme_set_options", 00:04:42.979 "bdev_passthru_delete", 00:04:42.979 "bdev_passthru_create", 00:04:42.979 "bdev_lvol_set_parent_bdev", 00:04:42.979 "bdev_lvol_set_parent", 00:04:42.979 "bdev_lvol_check_shallow_copy", 00:04:42.979 "bdev_lvol_start_shallow_copy", 00:04:42.979 "bdev_lvol_grow_lvstore", 00:04:42.979 "bdev_lvol_get_lvols", 00:04:42.979 "bdev_lvol_get_lvstores", 00:04:42.979 "bdev_lvol_delete", 00:04:42.979 "bdev_lvol_set_read_only", 00:04:42.979 "bdev_lvol_resize", 00:04:42.979 "bdev_lvol_decouple_parent", 00:04:42.979 "bdev_lvol_inflate", 00:04:42.979 "bdev_lvol_rename", 00:04:42.979 "bdev_lvol_clone_bdev", 00:04:42.979 "bdev_lvol_clone", 00:04:42.979 "bdev_lvol_snapshot", 00:04:42.979 "bdev_lvol_create", 00:04:42.979 "bdev_lvol_delete_lvstore", 00:04:42.979 "bdev_lvol_rename_lvstore", 
00:04:42.979 "bdev_lvol_create_lvstore", 00:04:42.979 "bdev_raid_set_options", 00:04:42.979 "bdev_raid_remove_base_bdev", 00:04:42.979 "bdev_raid_add_base_bdev", 00:04:42.979 "bdev_raid_delete", 00:04:42.979 "bdev_raid_create", 00:04:42.979 "bdev_raid_get_bdevs", 00:04:42.979 "bdev_error_inject_error", 00:04:42.979 "bdev_error_delete", 00:04:42.979 "bdev_error_create", 00:04:42.979 "bdev_split_delete", 00:04:42.979 "bdev_split_create", 00:04:42.979 "bdev_delay_delete", 00:04:42.979 "bdev_delay_create", 00:04:42.979 "bdev_delay_update_latency", 00:04:42.979 "bdev_zone_block_delete", 00:04:42.979 "bdev_zone_block_create", 00:04:42.979 "blobfs_create", 00:04:42.979 "blobfs_detect", 00:04:42.979 "blobfs_set_cache_size", 00:04:42.979 "bdev_aio_delete", 00:04:42.979 "bdev_aio_rescan", 00:04:42.979 "bdev_aio_create", 00:04:42.979 "bdev_ftl_set_property", 00:04:42.979 "bdev_ftl_get_properties", 00:04:42.979 "bdev_ftl_get_stats", 00:04:42.979 "bdev_ftl_unmap", 00:04:42.979 "bdev_ftl_unload", 00:04:42.979 "bdev_ftl_delete", 00:04:42.979 "bdev_ftl_load", 00:04:42.979 "bdev_ftl_create", 00:04:42.979 "bdev_virtio_attach_controller", 00:04:42.979 "bdev_virtio_scsi_get_devices", 00:04:42.979 "bdev_virtio_detach_controller", 00:04:42.979 "bdev_virtio_blk_set_hotplug", 00:04:42.979 "bdev_iscsi_delete", 00:04:42.979 "bdev_iscsi_create", 00:04:42.979 "bdev_iscsi_set_options", 00:04:42.979 "accel_error_inject_error", 00:04:42.980 "ioat_scan_accel_module", 00:04:42.980 "dsa_scan_accel_module", 00:04:42.980 "iaa_scan_accel_module", 00:04:42.980 "vfu_virtio_create_fs_endpoint", 00:04:42.980 "vfu_virtio_create_scsi_endpoint", 00:04:42.980 "vfu_virtio_scsi_remove_target", 00:04:42.980 "vfu_virtio_scsi_add_target", 00:04:42.980 "vfu_virtio_create_blk_endpoint", 00:04:42.980 "vfu_virtio_delete_endpoint", 00:04:42.980 "keyring_file_remove_key", 00:04:42.980 "keyring_file_add_key", 00:04:42.980 "keyring_linux_set_options", 00:04:42.980 "fsdev_aio_delete", 00:04:42.980 "fsdev_aio_create", 00:04:42.980 "iscsi_get_histogram", 00:04:42.980 "iscsi_enable_histogram", 00:04:42.980 "iscsi_set_options", 00:04:42.980 "iscsi_get_auth_groups", 00:04:42.980 "iscsi_auth_group_remove_secret", 00:04:42.980 "iscsi_auth_group_add_secret", 00:04:42.980 "iscsi_delete_auth_group", 00:04:42.980 "iscsi_create_auth_group", 00:04:42.980 "iscsi_set_discovery_auth", 00:04:42.980 "iscsi_get_options", 00:04:42.980 "iscsi_target_node_request_logout", 00:04:42.980 "iscsi_target_node_set_redirect", 00:04:42.980 "iscsi_target_node_set_auth", 00:04:42.980 "iscsi_target_node_add_lun", 00:04:42.980 "iscsi_get_stats", 00:04:42.980 "iscsi_get_connections", 00:04:42.980 "iscsi_portal_group_set_auth", 00:04:42.980 "iscsi_start_portal_group", 00:04:42.980 "iscsi_delete_portal_group", 00:04:42.980 "iscsi_create_portal_group", 00:04:42.980 "iscsi_get_portal_groups", 00:04:42.980 "iscsi_delete_target_node", 00:04:42.980 "iscsi_target_node_remove_pg_ig_maps", 00:04:42.980 "iscsi_target_node_add_pg_ig_maps", 00:04:42.980 "iscsi_create_target_node", 00:04:42.980 "iscsi_get_target_nodes", 00:04:42.980 "iscsi_delete_initiator_group", 00:04:42.980 "iscsi_initiator_group_remove_initiators", 00:04:42.980 "iscsi_initiator_group_add_initiators", 00:04:42.980 "iscsi_create_initiator_group", 00:04:42.980 "iscsi_get_initiator_groups", 00:04:42.980 "nvmf_set_crdt", 00:04:42.980 "nvmf_set_config", 00:04:42.980 "nvmf_set_max_subsystems", 00:04:42.980 "nvmf_stop_mdns_prr", 00:04:42.980 "nvmf_publish_mdns_prr", 00:04:42.980 "nvmf_subsystem_get_listeners", 00:04:42.980 
"nvmf_subsystem_get_qpairs", 00:04:42.980 "nvmf_subsystem_get_controllers", 00:04:42.980 "nvmf_get_stats", 00:04:42.980 "nvmf_get_transports", 00:04:42.980 "nvmf_create_transport", 00:04:42.980 "nvmf_get_targets", 00:04:42.980 "nvmf_delete_target", 00:04:42.980 "nvmf_create_target", 00:04:42.980 "nvmf_subsystem_allow_any_host", 00:04:42.980 "nvmf_subsystem_set_keys", 00:04:42.980 "nvmf_subsystem_remove_host", 00:04:42.980 "nvmf_subsystem_add_host", 00:04:42.980 "nvmf_ns_remove_host", 00:04:42.980 "nvmf_ns_add_host", 00:04:42.980 "nvmf_subsystem_remove_ns", 00:04:42.980 "nvmf_subsystem_set_ns_ana_group", 00:04:42.980 "nvmf_subsystem_add_ns", 00:04:42.980 "nvmf_subsystem_listener_set_ana_state", 00:04:42.980 "nvmf_discovery_get_referrals", 00:04:42.980 "nvmf_discovery_remove_referral", 00:04:42.980 "nvmf_discovery_add_referral", 00:04:42.980 "nvmf_subsystem_remove_listener", 00:04:42.980 "nvmf_subsystem_add_listener", 00:04:42.980 "nvmf_delete_subsystem", 00:04:42.980 "nvmf_create_subsystem", 00:04:42.980 "nvmf_get_subsystems", 00:04:42.980 "env_dpdk_get_mem_stats", 00:04:42.980 "nbd_get_disks", 00:04:42.980 "nbd_stop_disk", 00:04:42.980 "nbd_start_disk", 00:04:42.980 "ublk_recover_disk", 00:04:42.980 "ublk_get_disks", 00:04:42.980 "ublk_stop_disk", 00:04:42.980 "ublk_start_disk", 00:04:42.980 "ublk_destroy_target", 00:04:42.980 "ublk_create_target", 00:04:42.980 "virtio_blk_create_transport", 00:04:42.980 "virtio_blk_get_transports", 00:04:42.980 "vhost_controller_set_coalescing", 00:04:42.980 "vhost_get_controllers", 00:04:42.980 "vhost_delete_controller", 00:04:42.980 "vhost_create_blk_controller", 00:04:42.980 "vhost_scsi_controller_remove_target", 00:04:42.980 "vhost_scsi_controller_add_target", 00:04:42.980 "vhost_start_scsi_controller", 00:04:42.980 "vhost_create_scsi_controller", 00:04:42.980 "thread_set_cpumask", 00:04:42.980 "scheduler_set_options", 00:04:42.980 "framework_get_governor", 00:04:42.980 "framework_get_scheduler", 00:04:42.980 "framework_set_scheduler", 00:04:42.980 "framework_get_reactors", 00:04:42.980 "thread_get_io_channels", 00:04:42.980 "thread_get_pollers", 00:04:42.980 "thread_get_stats", 00:04:42.980 "framework_monitor_context_switch", 00:04:42.980 "spdk_kill_instance", 00:04:42.980 "log_enable_timestamps", 00:04:42.980 "log_get_flags", 00:04:42.980 "log_clear_flag", 00:04:42.980 "log_set_flag", 00:04:42.980 "log_get_level", 00:04:42.980 "log_set_level", 00:04:42.980 "log_get_print_level", 00:04:42.980 "log_set_print_level", 00:04:42.980 "framework_enable_cpumask_locks", 00:04:42.980 "framework_disable_cpumask_locks", 00:04:42.980 "framework_wait_init", 00:04:42.980 "framework_start_init", 00:04:42.980 "scsi_get_devices", 00:04:42.980 "bdev_get_histogram", 00:04:42.980 "bdev_enable_histogram", 00:04:42.980 "bdev_set_qos_limit", 00:04:42.980 "bdev_set_qd_sampling_period", 00:04:42.980 "bdev_get_bdevs", 00:04:42.980 "bdev_reset_iostat", 00:04:42.980 "bdev_get_iostat", 00:04:42.980 "bdev_examine", 00:04:42.980 "bdev_wait_for_examine", 00:04:42.980 "bdev_set_options", 00:04:42.980 "accel_get_stats", 00:04:42.980 "accel_set_options", 00:04:42.980 "accel_set_driver", 00:04:42.980 "accel_crypto_key_destroy", 00:04:42.980 "accel_crypto_keys_get", 00:04:42.980 "accel_crypto_key_create", 00:04:42.980 "accel_assign_opc", 00:04:42.980 "accel_get_module_info", 00:04:42.980 "accel_get_opc_assignments", 00:04:42.980 "vmd_rescan", 00:04:42.980 "vmd_remove_device", 00:04:42.980 "vmd_enable", 00:04:42.980 "sock_get_default_impl", 00:04:42.980 "sock_set_default_impl", 
00:04:42.980 "sock_impl_set_options", 00:04:42.980 "sock_impl_get_options", 00:04:42.980 "iobuf_get_stats", 00:04:42.980 "iobuf_set_options", 00:04:42.980 "keyring_get_keys", 00:04:42.980 "vfu_tgt_set_base_path", 00:04:42.980 "framework_get_pci_devices", 00:04:42.980 "framework_get_config", 00:04:42.980 "framework_get_subsystems", 00:04:42.980 "fsdev_set_opts", 00:04:42.980 "fsdev_get_opts", 00:04:42.980 "trace_get_info", 00:04:42.980 "trace_get_tpoint_group_mask", 00:04:42.980 "trace_disable_tpoint_group", 00:04:42.980 "trace_enable_tpoint_group", 00:04:42.980 "trace_clear_tpoint_mask", 00:04:42.980 "trace_set_tpoint_mask", 00:04:42.980 "notify_get_notifications", 00:04:42.980 "notify_get_types", 00:04:42.980 "spdk_get_version", 00:04:42.980 "rpc_get_methods" 00:04:42.980 ] 00:04:42.980 18:12:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:42.980 18:12:36 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:42.980 18:12:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.980 18:12:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:42.980 18:12:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 228745 00:04:42.980 18:12:36 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 228745 ']' 00:04:42.980 18:12:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 228745 00:04:42.980 18:12:36 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:42.980 18:12:36 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.980 18:12:36 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 228745 00:04:42.980 18:12:36 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.980 18:12:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.980 18:12:36 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 228745' 00:04:42.980 killing process with pid 228745 00:04:42.980 18:12:36 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 228745 00:04:42.980 18:12:36 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 228745 00:04:43.549 00:04:43.549 real 0m1.688s 00:04:43.549 user 0m3.091s 00:04:43.549 sys 0m0.488s 00:04:43.549 18:12:36 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.549 18:12:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.549 ************************************ 00:04:43.549 END TEST spdkcli_tcp 00:04:43.549 ************************************ 00:04:43.549 18:12:36 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:43.549 18:12:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.549 18:12:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.549 18:12:36 -- common/autotest_common.sh@10 -- # set +x 00:04:43.549 ************************************ 00:04:43.549 START TEST dpdk_mem_utility 00:04:43.549 ************************************ 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:43.549 * Looking for test storage... 
00:04:43.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.549 18:12:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:43.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.549 --rc genhtml_branch_coverage=1 00:04:43.549 --rc genhtml_function_coverage=1 00:04:43.549 --rc genhtml_legend=1 00:04:43.549 --rc geninfo_all_blocks=1 00:04:43.549 --rc geninfo_unexecuted_blocks=1 00:04:43.549 00:04:43.549 ' 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:43.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.549 --rc 
genhtml_branch_coverage=1 00:04:43.549 --rc genhtml_function_coverage=1 00:04:43.549 --rc genhtml_legend=1 00:04:43.549 --rc geninfo_all_blocks=1 00:04:43.549 --rc geninfo_unexecuted_blocks=1 00:04:43.549 00:04:43.549 ' 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:43.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.549 --rc genhtml_branch_coverage=1 00:04:43.549 --rc genhtml_function_coverage=1 00:04:43.549 --rc genhtml_legend=1 00:04:43.549 --rc geninfo_all_blocks=1 00:04:43.549 --rc geninfo_unexecuted_blocks=1 00:04:43.549 00:04:43.549 ' 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:43.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.549 --rc genhtml_branch_coverage=1 00:04:43.549 --rc genhtml_function_coverage=1 00:04:43.549 --rc genhtml_legend=1 00:04:43.549 --rc geninfo_all_blocks=1 00:04:43.549 --rc geninfo_unexecuted_blocks=1 00:04:43.549 00:04:43.549 ' 00:04:43.549 18:12:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:43.549 18:12:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=229062 00:04:43.549 18:12:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 229062 00:04:43.549 18:12:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 229062 ']' 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:43.549 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.808 [2024-10-08 18:12:36.876703] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
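The dpdk_mem_utility test starting here exercises a simple pipeline: launch spdk_tgt, wait for its RPC socket, ask it to dump DPDK memory statistics, then post-process the dump offline. env_dpdk_get_mem_stats writes /tmp/spdk_mem_dump.txt (the filename returned in the RPC reply below), and scripts/dpdk_mem_info.py parses that file, first as a summary of heaps, mempools, and memzones, then with -m 0 as a per-element map of heap 0. A sketch of the sequence, with binary and script paths as in the log (the rpc.py call assumes the default /var/tmp/spdk.sock socket):

  build/bin/spdk_tgt &                      # test_dpdk_mem_info.sh@12
  scripts/rpc.py env_dpdk_get_mem_stats     # dumps stats to /tmp/spdk_mem_dump.txt
  scripts/dpdk_mem_info.py                  # heap / mempool / memzone summary
  scripts/dpdk_mem_info.py -m 0             # detailed element map of heap id 0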
00:04:43.808 [2024-10-08 18:12:36.876755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229062 ] 00:04:43.808 [2024-10-08 18:12:36.944721] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.808 [2024-10-08 18:12:37.024641] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.378 18:12:37 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:44.378 18:12:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:44.378 18:12:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:44.378 18:12:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:44.378 18:12:37 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.378 18:12:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:44.637 { 00:04:44.637 "filename": "/tmp/spdk_mem_dump.txt" 00:04:44.637 } 00:04:44.637 18:12:37 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.637 18:12:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:44.637 DPDK memory size 860.000000 MiB in 1 heap(s) 00:04:44.637 1 heaps totaling size 860.000000 MiB 00:04:44.637 size: 860.000000 MiB heap id: 0 00:04:44.637 end heaps---------- 00:04:44.637 9 mempools totaling size 642.649841 MiB 00:04:44.637 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:44.637 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:44.637 size: 92.545471 MiB name: bdev_io_229062 00:04:44.637 size: 51.011292 MiB name: evtpool_229062 00:04:44.637 size: 50.003479 MiB name: msgpool_229062 00:04:44.637 size: 36.509338 MiB name: fsdev_io_229062 00:04:44.637 size: 21.763794 MiB name: PDU_Pool 00:04:44.637 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:44.637 size: 0.026123 MiB name: Session_Pool 00:04:44.637 end mempools------- 00:04:44.637 6 memzones totaling size 4.142822 MiB 00:04:44.637 size: 1.000366 MiB name: RG_ring_0_229062 00:04:44.637 size: 1.000366 MiB name: RG_ring_1_229062 00:04:44.637 size: 1.000366 MiB name: RG_ring_4_229062 00:04:44.637 size: 1.000366 MiB name: RG_ring_5_229062 00:04:44.637 size: 0.125366 MiB name: RG_ring_2_229062 00:04:44.637 size: 0.015991 MiB name: RG_ring_3_229062 00:04:44.637 end memzones------- 00:04:44.637 18:12:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:44.637 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:04:44.637 list of free elements. 
size: 13.984680 MiB 00:04:44.637 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:44.637 element at address: 0x200000800000 with size: 1.996948 MiB 00:04:44.638 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:04:44.638 element at address: 0x20001be00000 with size: 0.999878 MiB 00:04:44.638 element at address: 0x200034a00000 with size: 0.994446 MiB 00:04:44.638 element at address: 0x200009600000 with size: 0.959839 MiB 00:04:44.638 element at address: 0x200015e00000 with size: 0.954285 MiB 00:04:44.638 element at address: 0x20001c000000 with size: 0.936584 MiB 00:04:44.638 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:44.638 element at address: 0x20001d800000 with size: 0.582886 MiB 00:04:44.638 element at address: 0x200003e00000 with size: 0.495422 MiB 00:04:44.638 element at address: 0x20000d800000 with size: 0.490723 MiB 00:04:44.638 element at address: 0x20001c200000 with size: 0.485657 MiB 00:04:44.638 element at address: 0x200007000000 with size: 0.481934 MiB 00:04:44.638 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:04:44.638 element at address: 0x200003a00000 with size: 0.355042 MiB 00:04:44.638 list of standard malloc elements. size: 199.218628 MiB 00:04:44.638 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:04:44.638 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:04:44.638 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:04:44.638 element at address: 0x20001befff80 with size: 1.000122 MiB 00:04:44.638 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:04:44.638 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:44.638 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:04:44.638 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:44.638 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:04:44.638 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:44.638 element at address: 0x200003a5ae40 with size: 0.000183 MiB 00:04:44.638 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:44.638 element at address: 0x200003a5f300 with size: 0.000183 MiB 00:04:44.638 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:04:44.638 element at address: 0x200003aff940 with size: 0.000183 MiB 00:04:44.638 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:44.638 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:04:44.638 element at address: 0x200003eff000 with size: 0.000183 MiB 00:04:44.638 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x20000707b600 with size: 0.000183 MiB 00:04:44.638 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:04:44.638 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:04:44.638 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:04:44.638 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:04:44.638 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:04:44.638 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:04:44.638 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:04:44.638 element at address: 0x20001d895380 with size: 0.000183 MiB 00:04:44.638 element at address: 0x20001d895440 with size: 0.000183 MiB 00:04:44.638 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:04:44.638 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:04:44.638 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:04:44.638 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:04:44.638 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:04:44.638 list of memzone associated elements. size: 646.796692 MiB 00:04:44.638 element at address: 0x20001d895500 with size: 211.416748 MiB 00:04:44.638 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:44.638 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:04:44.638 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:44.638 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:04:44.638 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_229062_0 00:04:44.638 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:44.638 associated memzone info: size: 48.002930 MiB name: MP_evtpool_229062_0 00:04:44.638 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:44.638 associated memzone info: size: 48.002930 MiB name: MP_msgpool_229062_0 00:04:44.638 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:04:44.638 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_229062_0 00:04:44.638 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:04:44.638 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:44.638 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:04:44.638 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:44.638 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:44.638 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_229062 00:04:44.638 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:44.638 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_229062 00:04:44.638 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:44.638 associated memzone info: size: 1.007996 MiB name: MP_evtpool_229062 00:04:44.638 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:04:44.638 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:44.638 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:04:44.638 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:44.638 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:04:44.638 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:44.638 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:04:44.638 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:44.638 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:44.638 associated memzone info: size: 1.000366 MiB name: RG_ring_0_229062 00:04:44.638 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:44.638 associated memzone info: size: 
1.000366 MiB name: RG_ring_1_229062 00:04:44.638 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:04:44.638 associated memzone info: size: 1.000366 MiB name: RG_ring_4_229062 00:04:44.638 element at address: 0x200034afe940 with size: 1.000488 MiB 00:04:44.638 associated memzone info: size: 1.000366 MiB name: RG_ring_5_229062 00:04:44.638 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:04:44.638 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_229062 00:04:44.638 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:04:44.638 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_229062 00:04:44.638 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:04:44.638 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:44.638 element at address: 0x20000707b780 with size: 0.500488 MiB 00:04:44.638 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:44.638 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:04:44.638 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:44.638 element at address: 0x200003a5f3c0 with size: 0.125488 MiB 00:04:44.638 associated memzone info: size: 0.125366 MiB name: RG_ring_2_229062 00:04:44.638 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:04:44.638 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:44.638 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:04:44.638 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:44.638 element at address: 0x200003a5b100 with size: 0.016113 MiB 00:04:44.638 associated memzone info: size: 0.015991 MiB name: RG_ring_3_229062 00:04:44.638 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:04:44.638 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:44.638 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:44.638 associated memzone info: size: 0.000183 MiB name: MP_msgpool_229062 00:04:44.638 element at address: 0x200003affa00 with size: 0.000305 MiB 00:04:44.638 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_229062 00:04:44.638 element at address: 0x200003a5af00 with size: 0.000305 MiB 00:04:44.638 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_229062 00:04:44.638 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:04:44.638 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:44.638 18:12:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:44.638 18:12:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 229062 00:04:44.638 18:12:37 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 229062 ']' 00:04:44.638 18:12:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 229062 00:04:44.638 18:12:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:44.638 18:12:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:44.638 18:12:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 229062 00:04:44.638 18:12:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:44.638 18:12:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:44.638 18:12:37 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 229062' 00:04:44.638 killing 
process with pid 229062 00:04:44.638 18:12:37 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 229062 00:04:44.638 18:12:37 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 229062 00:04:44.897 00:04:44.897 real 0m1.540s 00:04:44.897 user 0m1.621s 00:04:44.897 sys 0m0.437s 00:04:44.897 18:12:38 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.897 18:12:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:44.897 ************************************ 00:04:44.897 END TEST dpdk_mem_utility 00:04:44.897 ************************************ 00:04:45.156 18:12:38 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:45.156 18:12:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.156 18:12:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.156 18:12:38 -- common/autotest_common.sh@10 -- # set +x 00:04:45.156 ************************************ 00:04:45.156 START TEST event 00:04:45.156 ************************************ 00:04:45.156 18:12:38 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:45.156 * Looking for test storage... 00:04:45.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:45.156 18:12:38 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:45.156 18:12:38 event -- common/autotest_common.sh@1681 -- # lcov --version 00:04:45.156 18:12:38 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:45.156 18:12:38 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:45.156 18:12:38 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.156 18:12:38 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.156 18:12:38 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.156 18:12:38 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.156 18:12:38 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.156 18:12:38 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.156 18:12:38 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.156 18:12:38 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.156 18:12:38 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.156 18:12:38 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.156 18:12:38 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.156 18:12:38 event -- scripts/common.sh@344 -- # case "$op" in 00:04:45.156 18:12:38 event -- scripts/common.sh@345 -- # : 1 00:04:45.156 18:12:38 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.156 18:12:38 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.156 18:12:38 event -- scripts/common.sh@365 -- # decimal 1 00:04:45.156 18:12:38 event -- scripts/common.sh@353 -- # local d=1 00:04:45.156 18:12:38 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.156 18:12:38 event -- scripts/common.sh@355 -- # echo 1 00:04:45.156 18:12:38 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.156 18:12:38 event -- scripts/common.sh@366 -- # decimal 2 00:04:45.156 18:12:38 event -- scripts/common.sh@353 -- # local d=2 00:04:45.156 18:12:38 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.156 18:12:38 event -- scripts/common.sh@355 -- # echo 2 00:04:45.156 18:12:38 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.156 18:12:38 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.156 18:12:38 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.156 18:12:38 event -- scripts/common.sh@368 -- # return 0 00:04:45.156 18:12:38 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.156 18:12:38 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:45.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.156 --rc genhtml_branch_coverage=1 00:04:45.156 --rc genhtml_function_coverage=1 00:04:45.156 --rc genhtml_legend=1 00:04:45.156 --rc geninfo_all_blocks=1 00:04:45.156 --rc geninfo_unexecuted_blocks=1 00:04:45.156 00:04:45.156 ' 00:04:45.156 18:12:38 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:45.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.156 --rc genhtml_branch_coverage=1 00:04:45.156 --rc genhtml_function_coverage=1 00:04:45.156 --rc genhtml_legend=1 00:04:45.156 --rc geninfo_all_blocks=1 00:04:45.156 --rc geninfo_unexecuted_blocks=1 00:04:45.156 00:04:45.156 ' 00:04:45.156 18:12:38 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:45.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.156 --rc genhtml_branch_coverage=1 00:04:45.156 --rc genhtml_function_coverage=1 00:04:45.156 --rc genhtml_legend=1 00:04:45.156 --rc geninfo_all_blocks=1 00:04:45.156 --rc geninfo_unexecuted_blocks=1 00:04:45.157 00:04:45.157 ' 00:04:45.157 18:12:38 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:45.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.157 --rc genhtml_branch_coverage=1 00:04:45.157 --rc genhtml_function_coverage=1 00:04:45.157 --rc genhtml_legend=1 00:04:45.157 --rc geninfo_all_blocks=1 00:04:45.157 --rc geninfo_unexecuted_blocks=1 00:04:45.157 00:04:45.157 ' 00:04:45.157 18:12:38 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:45.157 18:12:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:45.157 18:12:38 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:45.157 18:12:38 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:45.157 18:12:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.157 18:12:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.157 ************************************ 00:04:45.157 START TEST event_perf 00:04:45.157 ************************************ 00:04:45.157 18:12:38 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:45.415 Running I/O for 1 seconds...[2024-10-08 18:12:38.492189] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:04:45.415 [2024-10-08 18:12:38.492258] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229459 ] 00:04:45.415 [2024-10-08 18:12:38.563433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:45.415 [2024-10-08 18:12:38.638116] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.415 [2024-10-08 18:12:38.638226] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.415 [2024-10-08 18:12:38.638331] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.415 Running I/O for 1 seconds...[2024-10-08 18:12:38.638332] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:04:46.793 00:04:46.793 lcore 0: 205345 00:04:46.793 lcore 1: 205343 00:04:46.793 lcore 2: 205342 00:04:46.793 lcore 3: 205344 00:04:46.793 done. 00:04:46.793 00:04:46.793 real 0m1.237s 00:04:46.793 user 0m4.149s 00:04:46.793 sys 0m0.085s 00:04:46.793 18:12:39 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.793 18:12:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:46.793 ************************************ 00:04:46.793 END TEST event_perf 00:04:46.793 ************************************ 00:04:46.793 18:12:39 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:46.793 18:12:39 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:46.793 18:12:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.793 18:12:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.793 ************************************ 00:04:46.793 START TEST event_reactor 00:04:46.793 ************************************ 00:04:46.793 18:12:39 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:46.793 [2024-10-08 18:12:39.797170] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
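The event_perf figures above come from a fixed one-second run spread over four reactors: -m 0xF unmasks cores 0 through 3 and -t 1 sets the duration, and each lcore reports how many events it processed. The four counts land within a few events of 205 thousand apiece, which suggests the framework's event distribution across reactors is well balanced. The invocation, exactly as run_test issued it:

  # One second of event processing on cores 0-3 (event.sh@45).
  test/event/event_perf/event_perf -m 0xF -t 1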
00:04:46.793 [2024-10-08 18:12:39.797243] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229665 ] 00:04:46.793 [2024-10-08 18:12:39.866024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.793 [2024-10-08 18:12:39.937533] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.730 test_start 00:04:47.730 oneshot 00:04:47.730 tick 100 00:04:47.730 tick 100 00:04:47.730 tick 250 00:04:47.730 tick 100 00:04:47.730 tick 100 00:04:47.730 tick 250 00:04:47.730 tick 100 00:04:47.730 tick 500 00:04:47.730 tick 100 00:04:47.730 tick 100 00:04:47.730 tick 250 00:04:47.730 tick 100 00:04:47.730 tick 100 00:04:47.730 test_end 00:04:47.730 00:04:47.730 real 0m1.233s 00:04:47.730 user 0m1.153s 00:04:47.730 sys 0m0.076s 00:04:47.730 18:12:41 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.730 18:12:41 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:47.730 ************************************ 00:04:47.730 END TEST event_reactor 00:04:47.730 ************************************ 00:04:47.730 18:12:41 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:47.730 18:12:41 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:47.730 18:12:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.730 18:12:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.989 ************************************ 00:04:47.989 START TEST event_reactor_perf 00:04:47.989 ************************************ 00:04:47.989 18:12:41 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:47.989 [2024-10-08 18:12:41.099138] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
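The event_reactor trace above reads as a single-core timer test: test_start and test_end bracket the one-second run, oneshot fires once, and the repeating tick 100 / tick 250 / tick 500 lines are consistent with three periodic timers whose periods sit in a 100:250:500 ratio (tick 100 fires most often, tick 500 only once in the window). The run itself, per the run_test line:

  # One-second reactor timer trace on a single core (event.sh@46).
  test/event/reactor/reactor -t 1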
00:04:47.989 [2024-10-08 18:12:41.099214] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229877 ] 00:04:47.989 [2024-10-08 18:12:41.169771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.989 [2024-10-08 18:12:41.242001] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.367 test_start 00:04:49.367 test_end 00:04:49.367 Performance: 515736 events per second 00:04:49.367 00:04:49.367 real 0m1.233s 00:04:49.367 user 0m1.140s 00:04:49.367 sys 0m0.089s 00:04:49.367 18:12:42 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.367 18:12:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.367 ************************************ 00:04:49.367 END TEST event_reactor_perf 00:04:49.367 ************************************ 00:04:49.367 18:12:42 event -- event/event.sh@49 -- # uname -s 00:04:49.367 18:12:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:49.367 18:12:42 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:49.367 18:12:42 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.367 18:12:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.367 18:12:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.367 ************************************ 00:04:49.367 START TEST event_scheduler 00:04:49.367 ************************************ 00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:49.367 * Looking for test storage... 
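reactor_perf is the throughput counterpart: instead of tracing timers, it counts how many events a single reactor can turn around in the one-second window, reported above as 515736 events per second on this host. Again the invocation is the one run_test shows:

  # Single-reactor event throughput over one second (event.sh@47).
  test/event/reactor_perf/reactor_perf -t 1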
00:04:49.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.367 18:12:42 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:49.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.367 --rc genhtml_branch_coverage=1 00:04:49.367 --rc genhtml_function_coverage=1 00:04:49.367 --rc genhtml_legend=1 00:04:49.367 --rc geninfo_all_blocks=1 00:04:49.367 --rc geninfo_unexecuted_blocks=1 00:04:49.367 00:04:49.367 ' 00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:49.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.367 --rc genhtml_branch_coverage=1 00:04:49.367 --rc genhtml_function_coverage=1 00:04:49.367 --rc genhtml_legend=1 00:04:49.367 --rc geninfo_all_blocks=1 00:04:49.367 --rc geninfo_unexecuted_blocks=1 00:04:49.367 00:04:49.367 ' 00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:49.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.367 --rc genhtml_branch_coverage=1 00:04:49.367 --rc genhtml_function_coverage=1 00:04:49.367 --rc genhtml_legend=1 00:04:49.367 --rc geninfo_all_blocks=1 00:04:49.367 --rc geninfo_unexecuted_blocks=1 00:04:49.367 00:04:49.367 ' 00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:49.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.367 --rc genhtml_branch_coverage=1 00:04:49.367 --rc genhtml_function_coverage=1 00:04:49.367 --rc genhtml_legend=1 00:04:49.367 --rc geninfo_all_blocks=1 00:04:49.367 --rc geninfo_unexecuted_blocks=1 00:04:49.367 00:04:49.367 ' 00:04:49.367 18:12:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:49.367 18:12:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=230232 00:04:49.367 18:12:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:49.367 18:12:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.367 18:12:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 230232 
00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 230232 ']' 00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.367 18:12:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.367 [2024-10-08 18:12:42.609544] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:04:49.367 [2024-10-08 18:12:42.609594] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid230232 ] 00:04:49.367 [2024-10-08 18:12:42.676914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:49.626 [2024-10-08 18:12:42.756987] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.626 [2024-10-08 18:12:42.757096] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.626 [2024-10-08 18:12:42.757186] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.626 [2024-10-08 18:12:42.757187] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:04:50.194 18:12:43 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.194 18:12:43 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:50.194 18:12:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:50.194 18:12:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.194 18:12:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.194 [2024-10-08 18:12:43.456009] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:50.194 [2024-10-08 18:12:43.456031] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:50.194 [2024-10-08 18:12:43.456040] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:50.194 [2024-10-08 18:12:43.456046] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:50.194 [2024-10-08 18:12:43.456051] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:50.194 18:12:43 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.194 18:12:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:50.194 18:12:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.194 18:12:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.498 [2024-10-08 18:12:43.528975] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
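Before creating any threads, the scheduler test switches the framework from the default static scheduler to the dynamic one. The dpdk_governor refuses to initialize because the 0xF core mask covers only some SMT siblings on this host, so the test proceeds without a governor, and scheduler_dynamic applies the thresholds in the NOTICE lines above (load limit 20, core limit 80, core busy 95). The setup steps as the log shows them, in sketch form:

  # Start the test app on cores 0-3 with main lcore 2, holding init until RPC (scheduler.sh@34).
  test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scripts/rpc.py framework_set_scheduler dynamic   # scheduler.sh@39
  scripts/rpc.py framework_start_init              # scheduler.sh@40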
00:04:50.498 18:12:43 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.498 18:12:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:50.498 18:12:43 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.498 18:12:43 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.498 18:12:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.498 ************************************ 00:04:50.498 START TEST scheduler_create_thread 00:04:50.498 ************************************ 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.498 2 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.498 3 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.498 4 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.498 5 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.498 6 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.498 7 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.498 8 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.498 9 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.498 10 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.498 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.511 18:12:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.511 18:12:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:51.511 18:12:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.511 18:12:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.890 18:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.890 18:12:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:52.890 18:12:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:52.890 18:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.890 18:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.827 18:12:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.827 00:04:53.827 real 0m3.381s 00:04:53.827 user 0m0.023s 00:04:53.827 sys 0m0.006s 00:04:53.827 18:12:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.827 18:12:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.827 ************************************ 00:04:53.827 END TEST scheduler_create_thread 00:04:53.827 ************************************ 00:04:53.827 18:12:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:53.827 18:12:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 230232 00:04:53.827 18:12:46 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 230232 ']' 00:04:53.827 18:12:46 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 230232 00:04:53.827 18:12:46 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:53.827 18:12:46 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.827 18:12:46 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 230232 00:04:53.827 18:12:47 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:53.827 18:12:47 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:53.827 18:12:47 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 230232' 00:04:53.827 killing process with pid 230232 00:04:53.827 18:12:47 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 230232 00:04:53.827 18:12:47 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 230232 00:04:54.086 [2024-10-08 18:12:47.328948] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
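scheduler_create_thread drives the dynamic scheduler through per-thread RPCs that come from the test's scheduler_plugin rather than the core RPC set: four active threads pinned one per core (masks 0x1, 0x2, 0x4, 0x8) at 100 percent activity, four idle pinned threads at 0, an unpinned one_third_active thread at 30 and a half_active thread at 0; it then raises thread 11 to 50 percent and finally creates and deletes a throwaway thread 12. A condensed sketch of the calls traced above (rpc_cmd here is the suite's wrapper around rpc.py):

  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100   # returns thread id 12
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12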
00:04:54.345 00:04:54.345 real 0m5.172s 00:04:54.345 user 0m10.600s 00:04:54.345 sys 0m0.408s 00:04:54.345 18:12:47 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.345 18:12:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.345 ************************************ 00:04:54.345 END TEST event_scheduler 00:04:54.345 ************************************ 00:04:54.345 18:12:47 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:54.345 18:12:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:54.345 18:12:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.345 18:12:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.345 18:12:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.345 ************************************ 00:04:54.345 START TEST app_repeat 00:04:54.345 ************************************ 00:04:54.345 18:12:47 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:54.345 18:12:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.345 18:12:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.345 18:12:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:54.345 18:12:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.345 18:12:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:54.345 18:12:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:54.345 18:12:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:54.345 18:12:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=231130 00:04:54.345 18:12:47 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:54.345 18:12:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.345 18:12:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 231130' 00:04:54.345 Process app_repeat pid: 231130 00:04:54.345 18:12:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:54.345 18:12:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:54.345 spdk_app_start Round 0 00:04:54.345 18:12:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 231130 /var/tmp/spdk-nbd.sock 00:04:54.345 18:12:47 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 231130 ']' 00:04:54.345 18:12:47 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.345 18:12:47 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.345 18:12:47 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:54.345 18:12:47 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.345 18:12:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.605 [2024-10-08 18:12:47.673587] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:04:54.605 [2024-10-08 18:12:47.673642] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231130 ] 00:04:54.605 [2024-10-08 18:12:47.742706] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.605 [2024-10-08 18:12:47.813709] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.605 [2024-10-08 18:12:47.813711] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.605 18:12:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.605 18:12:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:54.605 18:12:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.864 Malloc0 00:04:54.864 18:12:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.123 Malloc1 00:04:55.123 18:12:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.123 18:12:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.123 18:12:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.123 18:12:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:55.123 18:12:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.123 18:12:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:55.123 18:12:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.123 18:12:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.123 18:12:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.123 18:12:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:55.123 18:12:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.123 18:12:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:55.123 18:12:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:55.123 18:12:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:55.123 18:12:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.123 18:12:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.382 /dev/nbd0 00:04:55.382 18:12:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.382 18:12:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.382 18:12:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:55.382 18:12:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:55.382 18:12:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:55.382 18:12:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:55.382 18:12:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:04:55.382 18:12:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:55.382 18:12:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:55.382 18:12:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:55.382 18:12:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.382 1+0 records in 00:04:55.382 1+0 records out 00:04:55.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195226 s, 21.0 MB/s 00:04:55.382 18:12:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.382 18:12:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:55.382 18:12:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.382 18:12:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:55.382 18:12:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:55.382 18:12:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.382 18:12:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.382 18:12:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:55.642 /dev/nbd1 00:04:55.642 18:12:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:55.642 18:12:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:55.642 18:12:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:55.642 18:12:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:55.642 18:12:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:55.642 18:12:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:55.642 18:12:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:55.642 18:12:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:55.642 18:12:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:55.642 18:12:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:55.642 18:12:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.642 1+0 records in 00:04:55.642 1+0 records out 00:04:55.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200621 s, 20.4 MB/s 00:04:55.642 18:12:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.642 18:12:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:55.642 18:12:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.642 18:12:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:55.642 18:12:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:55.642 18:12:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.642 18:12:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.642 
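
Both nbd devices pass the same readiness probe before use: waitfornbd polls /proc/partitions until the device name appears, then issues a single 4 KiB O_DIRECT read and checks that a non-empty block came back. A condensed sketch of that probe, under the assumption that a successful direct read means the export is live (the retry delay is an assumption; the trace only shows the loop bounds of 20 attempts):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do                       # wait for the kernel to publish it
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                         # assumed back-off, not visible above
        done
        for ((i = 1; i <= 20; i++)); do                       # wait until a direct read succeeds
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }
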
18:12:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.642 18:12:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.642 18:12:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:55.901 { 00:04:55.901 "nbd_device": "/dev/nbd0", 00:04:55.901 "bdev_name": "Malloc0" 00:04:55.901 }, 00:04:55.901 { 00:04:55.901 "nbd_device": "/dev/nbd1", 00:04:55.901 "bdev_name": "Malloc1" 00:04:55.901 } 00:04:55.901 ]' 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:55.901 { 00:04:55.901 "nbd_device": "/dev/nbd0", 00:04:55.901 "bdev_name": "Malloc0" 00:04:55.901 }, 00:04:55.901 { 00:04:55.901 "nbd_device": "/dev/nbd1", 00:04:55.901 "bdev_name": "Malloc1" 00:04:55.901 } 00:04:55.901 ]' 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:55.901 /dev/nbd1' 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:55.901 /dev/nbd1' 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:55.901 256+0 records in 00:04:55.901 256+0 records out 00:04:55.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103381 s, 101 MB/s 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:55.901 256+0 records in 00:04:55.901 256+0 records out 00:04:55.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141854 s, 73.9 MB/s 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:55.901 256+0 records in 00:04:55.901 256+0 records out 00:04:55.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146306 s, 71.7 MB/s 00:04:55.901 18:12:49 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.901 18:12:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.160 18:12:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.160 18:12:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.160 18:12:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.160 18:12:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.160 18:12:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.160 18:12:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.160 18:12:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.160 18:12:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.160 18:12:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.160 18:12:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:56.418 18:12:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:56.418 18:12:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:56.418 18:12:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:56.418 18:12:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.418 18:12:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:56.418 18:12:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:56.418 18:12:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.418 18:12:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.418 18:12:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.418 18:12:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.418 18:12:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.676 18:12:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:56.676 18:12:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:56.676 18:12:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.677 18:12:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:56.677 18:12:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:56.677 18:12:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.677 18:12:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:56.677 18:12:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:56.677 18:12:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:56.677 18:12:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:56.677 18:12:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:56.677 18:12:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:56.677 18:12:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.936 18:12:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.936 [2024-10-08 18:12:50.225363] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.195 [2024-10-08 18:12:50.292696] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.195 [2024-10-08 18:12:50.292697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.195 [2024-10-08 18:12:50.333283] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:57.195 [2024-10-08 18:12:50.333326] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.480 18:12:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.480 18:12:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:00.480 spdk_app_start Round 1 00:05:00.480 18:12:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 231130 /var/tmp/spdk-nbd.sock 00:05:00.480 18:12:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 231130 ']' 00:05:00.480 18:12:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.480 18:12:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.480 18:12:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
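
Teardown for the round follows a strict order: nbd_stop_disk per device, each confirmed by the name disappearing from /proc/partitions, then nbd_get_disks must come back empty before the app instance is signalled. A sketch of that final check, with the socket path taken from the trace and the $rpc wrapper illustrative:

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] && $rpc spdk_kill_instance SIGTERM    # only kill once no exports remain
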
00:05:00.480 18:12:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.481 18:12:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.481 18:12:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.481 18:12:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:00.481 18:12:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.481 Malloc0 00:05:00.481 18:12:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.481 Malloc1 00:05:00.481 18:12:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.481 18:12:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.481 18:12:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.481 18:12:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.481 18:12:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.481 18:12:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.481 18:12:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.481 18:12:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.481 18:12:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.481 18:12:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.481 18:12:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.481 18:12:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.481 18:12:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:00.481 18:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.481 18:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.481 18:12:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.739 /dev/nbd0 00:05:00.739 18:12:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.739 18:12:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.739 18:12:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:00.739 18:12:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:00.739 18:12:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:00.739 18:12:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:00.739 18:12:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:00.739 18:12:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:00.739 18:12:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:00.739 18:12:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:00.739 18:12:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:00.739 1+0 records in 00:05:00.739 1+0 records out 00:05:00.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000113844 s, 36.0 MB/s 00:05:00.739 18:12:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.739 18:12:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:00.739 18:12:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.739 18:12:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:00.739 18:12:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:00.739 18:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.739 18:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.739 18:12:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.997 /dev/nbd1 00:05:00.997 18:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:00.997 18:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:00.997 18:12:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:00.998 18:12:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:00.998 18:12:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:00.998 18:12:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:00.998 18:12:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:00.998 18:12:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:00.998 18:12:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:00.998 18:12:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:00.998 18:12:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.998 1+0 records in 00:05:00.998 1+0 records out 00:05:00.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229529 s, 17.8 MB/s 00:05:00.998 18:12:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.998 18:12:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:00.998 18:12:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.998 18:12:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:00.998 18:12:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:00.998 18:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.998 18:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.998 18:12:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.998 18:12:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.998 18:12:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.256 18:12:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:01.256 { 00:05:01.256 "nbd_device": "/dev/nbd0", 00:05:01.256 "bdev_name": "Malloc0" 00:05:01.256 }, 00:05:01.256 { 00:05:01.256 "nbd_device": "/dev/nbd1", 00:05:01.256 "bdev_name": "Malloc1" 00:05:01.256 } 00:05:01.256 ]' 00:05:01.256 18:12:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.256 { 00:05:01.256 "nbd_device": "/dev/nbd0", 00:05:01.256 "bdev_name": "Malloc0" 00:05:01.256 }, 00:05:01.256 { 00:05:01.256 "nbd_device": "/dev/nbd1", 00:05:01.256 "bdev_name": "Malloc1" 00:05:01.256 } 00:05:01.256 ]' 00:05:01.256 18:12:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.256 18:12:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.256 /dev/nbd1' 00:05:01.256 18:12:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.256 /dev/nbd1' 00:05:01.256 18:12:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.256 18:12:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.256 18:12:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.256 18:12:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.256 18:12:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.256 18:12:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.256 18:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.256 18:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.256 18:12:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.257 256+0 records in 00:05:01.257 256+0 records out 00:05:01.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107032 s, 98.0 MB/s 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.257 256+0 records in 00:05:01.257 256+0 records out 00:05:01.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137475 s, 76.3 MB/s 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.257 256+0 records in 00:05:01.257 256+0 records out 00:05:01.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150818 s, 69.5 MB/s 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.257 18:12:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.515 18:12:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.516 18:12:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.516 18:12:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.516 18:12:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.516 18:12:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.516 18:12:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.516 18:12:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.516 18:12:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.516 18:12:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.516 18:12:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.775 18:12:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.775 18:12:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.775 18:12:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.775 18:12:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.775 18:12:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.775 18:12:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.775 18:12:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.775 18:12:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.775 18:12:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.775 18:12:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.775 18:12:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.775 18:12:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.775 18:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.775 18:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.033 18:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.033 18:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.033 18:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.033 18:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.033 18:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.033 18:12:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.033 18:12:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.033 18:12:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.033 18:12:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.033 18:12:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.033 18:12:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.292 [2024-10-08 18:12:55.520522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.292 [2024-10-08 18:12:55.585801] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.292 [2024-10-08 18:12:55.585803] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.551 [2024-10-08 18:12:55.627156] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.551 [2024-10-08 18:12:55.627195] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.084 18:12:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.084 18:12:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:05.084 spdk_app_start Round 2 00:05:05.084 18:12:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 231130 /var/tmp/spdk-nbd.sock 00:05:05.084 18:12:58 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 231130 ']' 00:05:05.084 18:12:58 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.084 18:12:58 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.084 18:12:58 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:05.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
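
Each round repeats the data-integrity cycle visible in the dd/cmp lines above: 1 MiB of random data is staged in a scratch file, written to both exported devices with O_DIRECT, and each device is then compared byte-for-byte against the source. A self-contained sketch, assuming the Malloc bdevs are already exported as /dev/nbd0 and /dev/nbd1 and using an illustrative scratch path:

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256              # stage 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$dev bs=4096 count=256 oflag=direct     # write it through the nbd export
        cmp -b -n 1M $tmp $dev                                # read back and verify byte-for-byte
    done
    rm $tmp
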
00:05:05.084 18:12:58 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.084 18:12:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.343 18:12:58 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:05.343 18:12:58 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:05.343 18:12:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.602 Malloc0 00:05:05.602 18:12:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.861 Malloc1 00:05:05.861 18:12:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.861 18:12:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.861 18:12:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.861 18:12:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.861 18:12:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.861 18:12:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.861 18:12:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.861 18:12:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.861 18:12:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.861 18:12:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.861 18:12:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.861 18:12:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.861 18:12:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.861 18:12:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.861 18:12:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.861 18:12:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.861 /dev/nbd0 00:05:06.120 18:12:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.121 18:12:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.121 18:12:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:06.121 18:12:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:06.121 18:12:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:06.121 18:12:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:06.121 18:12:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:06.121 18:12:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:06.121 18:12:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:06.121 18:12:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:06.121 18:12:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:06.121 1+0 records in 00:05:06.121 1+0 records out 00:05:06.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188076 s, 21.8 MB/s 00:05:06.121 18:12:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.121 18:12:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:06.121 18:12:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.121 18:12:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:06.121 18:12:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:06.121 18:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.121 18:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.121 18:12:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.121 /dev/nbd1 00:05:06.121 18:12:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.380 18:12:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.380 18:12:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:06.380 18:12:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:06.380 18:12:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:06.380 18:12:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:06.380 18:12:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:06.380 18:12:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:06.380 18:12:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:06.380 18:12:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:06.380 18:12:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.380 1+0 records in 00:05:06.380 1+0 records out 00:05:06.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183035 s, 22.4 MB/s 00:05:06.380 18:12:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.380 18:12:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:06.380 18:12:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.380 18:12:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:06.380 18:12:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:06.380 18:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.380 18:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.380 18:12:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.380 18:12:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.380 18:12:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.380 18:12:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:06.380 { 00:05:06.380 "nbd_device": "/dev/nbd0", 00:05:06.380 "bdev_name": "Malloc0" 00:05:06.380 }, 00:05:06.380 { 00:05:06.380 "nbd_device": "/dev/nbd1", 00:05:06.380 "bdev_name": "Malloc1" 00:05:06.380 } 00:05:06.380 ]' 00:05:06.380 18:12:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.380 { 00:05:06.380 "nbd_device": "/dev/nbd0", 00:05:06.380 "bdev_name": "Malloc0" 00:05:06.380 }, 00:05:06.380 { 00:05:06.380 "nbd_device": "/dev/nbd1", 00:05:06.380 "bdev_name": "Malloc1" 00:05:06.380 } 00:05:06.380 ]' 00:05:06.380 18:12:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.380 18:12:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.380 /dev/nbd1' 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.640 /dev/nbd1' 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.640 256+0 records in 00:05:06.640 256+0 records out 00:05:06.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106526 s, 98.4 MB/s 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.640 256+0 records in 00:05:06.640 256+0 records out 00:05:06.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138225 s, 75.9 MB/s 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.640 256+0 records in 00:05:06.640 256+0 records out 00:05:06.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149408 s, 70.2 MB/s 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.640 18:12:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.899 18:12:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.899 18:12:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.899 18:12:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.899 18:12:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.899 18:12:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.899 18:12:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.899 18:12:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.899 18:12:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.899 18:12:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.899 18:12:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.899 18:13:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.899 18:13:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.899 18:13:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.899 18:13:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.899 18:13:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.899 18:13:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.899 18:13:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.899 18:13:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.899 18:13:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.899 18:13:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.899 18:13:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.161 18:13:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.161 18:13:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.161 18:13:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.161 18:13:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.161 18:13:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.161 18:13:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.161 18:13:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.161 18:13:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.161 18:13:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.161 18:13:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.161 18:13:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.161 18:13:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.161 18:13:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.419 18:13:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.678 [2024-10-08 18:13:00.857414] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.678 [2024-10-08 18:13:00.925544] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.678 [2024-10-08 18:13:00.925545] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.678 [2024-10-08 18:13:00.966251] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.678 [2024-10-08 18:13:00.966292] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.967 18:13:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 231130 /var/tmp/spdk-nbd.sock 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 231130 ']' 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
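
The final teardown reuses the killprocess helper already seen after the scheduler test: it refuses to act on an empty pid, confirms the process is still alive with kill -0, reads the command name to avoid signalling a sudo wrapper, then kills and reaps the target. A condensed sketch (the Linux/uname branch in the trace is folded into the ps call here):

    killprocess() {
        local pid=$1 name
        [ -n "$pid" ] || return 1                     # no pid, nothing to do
        kill -0 "$pid" || return 1                    # is it still running?
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }
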
00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:10.967 18:13:03 event.app_repeat -- event/event.sh@39 -- # killprocess 231130 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 231130 ']' 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 231130 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 231130 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 231130' 00:05:10.967 killing process with pid 231130 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@969 -- # kill 231130 00:05:10.967 18:13:03 event.app_repeat -- common/autotest_common.sh@974 -- # wait 231130 00:05:10.967 spdk_app_start is called in Round 0. 00:05:10.967 Shutdown signal received, stop current app iteration 00:05:10.967 Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 reinitialization... 00:05:10.967 spdk_app_start is called in Round 1. 00:05:10.967 Shutdown signal received, stop current app iteration 00:05:10.967 Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 reinitialization... 00:05:10.967 spdk_app_start is called in Round 2. 00:05:10.967 Shutdown signal received, stop current app iteration 00:05:10.967 Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 reinitialization... 00:05:10.967 spdk_app_start is called in Round 3. 
00:05:10.967 Shutdown signal received, stop current app iteration 00:05:10.967 18:13:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:10.967 18:13:04 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:10.967 00:05:10.967 real 0m16.462s 00:05:10.967 user 0m35.915s 00:05:10.967 sys 0m2.583s 00:05:10.967 18:13:04 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.967 18:13:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.967 ************************************ 00:05:10.967 END TEST app_repeat 00:05:10.967 ************************************ 00:05:10.967 18:13:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:10.967 18:13:04 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:10.967 18:13:04 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.967 18:13:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.967 18:13:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.967 ************************************ 00:05:10.967 START TEST cpu_locks 00:05:10.967 ************************************ 00:05:10.967 18:13:04 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:10.967 * Looking for test storage... 00:05:10.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:10.967 18:13:04 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:10.967 18:13:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:10.967 18:13:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:11.227 18:13:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.227 18:13:04 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:11.227 18:13:04 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.227 18:13:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:11.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.227 --rc genhtml_branch_coverage=1 00:05:11.227 --rc genhtml_function_coverage=1 00:05:11.227 --rc genhtml_legend=1 00:05:11.227 --rc geninfo_all_blocks=1 00:05:11.227 --rc geninfo_unexecuted_blocks=1 00:05:11.227 00:05:11.227 ' 00:05:11.227 18:13:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:11.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.227 --rc genhtml_branch_coverage=1 00:05:11.227 --rc genhtml_function_coverage=1 00:05:11.227 --rc genhtml_legend=1 00:05:11.227 --rc geninfo_all_blocks=1 00:05:11.227 --rc geninfo_unexecuted_blocks=1 00:05:11.227 00:05:11.227 ' 00:05:11.227 18:13:04 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:11.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.227 --rc genhtml_branch_coverage=1 00:05:11.227 --rc genhtml_function_coverage=1 00:05:11.227 --rc genhtml_legend=1 00:05:11.227 --rc geninfo_all_blocks=1 00:05:11.227 --rc geninfo_unexecuted_blocks=1 00:05:11.227 00:05:11.227 ' 00:05:11.227 18:13:04 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:11.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.227 --rc genhtml_branch_coverage=1 00:05:11.227 --rc genhtml_function_coverage=1 00:05:11.227 --rc genhtml_legend=1 00:05:11.227 --rc geninfo_all_blocks=1 00:05:11.227 --rc geninfo_unexecuted_blocks=1 00:05:11.227 00:05:11.227 ' 00:05:11.227 18:13:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:11.227 18:13:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:11.227 18:13:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:11.227 18:13:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:11.227 18:13:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.227 18:13:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.227 18:13:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.227 ************************************ 
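The cmp_versions walk traced above is how the suite decides whether the installed lcov predates 2.x before assembling LCOV_OPTS. A hand-condensed, runnable paraphrase of that logic (not the verbatim scripts/common.sh source):

  version_lt() {
      local IFS=.-: v
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace: 1.15 < 2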
00:05:11.227 START TEST default_locks 00:05:11.227 ************************************ 00:05:11.227 18:13:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:11.227 18:13:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=234145 00:05:11.227 18:13:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 234145 00:05:11.227 18:13:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.227 18:13:04 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 234145 ']' 00:05:11.227 18:13:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.227 18:13:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.227 18:13:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.227 18:13:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.227 18:13:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.227 [2024-10-08 18:13:04.439815] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:05:11.227 [2024-10-08 18:13:04.439856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234145 ] 00:05:11.227 [2024-10-08 18:13:04.507203] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.487 [2024-10-08 18:13:04.585918] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.055 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.055 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:12.055 18:13:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 234145 00:05:12.055 18:13:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 234145 00:05:12.055 18:13:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.314 lslocks: write error 00:05:12.314 18:13:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 234145 00:05:12.314 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 234145 ']' 00:05:12.314 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 234145 00:05:12.314 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:12.314 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.314 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 234145 00:05:12.314 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.314 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.314 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 234145' 
00:05:12.314 killing process with pid 234145 00:05:12.314 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 234145 00:05:12.314 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 234145 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 234145 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 234145 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 234145 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 234145 ']' 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (234145) - No such process 00:05:12.574 ERROR: process (pid: 234145) is no longer running 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:12.574 00:05:12.574 real 0m1.387s 00:05:12.574 user 0m1.455s 00:05:12.574 sys 0m0.454s 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.574 18:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.574 ************************************ 00:05:12.574 END TEST default_locks 00:05:12.574 ************************************ 00:05:12.574 18:13:05 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:12.574 18:13:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.574 18:13:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.574 18:13:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.574 ************************************ 00:05:12.574 START TEST default_locks_via_rpc 00:05:12.574 ************************************ 00:05:12.574 18:13:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:12.574 18:13:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=234452 00:05:12.574 18:13:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 234452 00:05:12.574 18:13:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.574 18:13:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 234452 ']' 00:05:12.574 18:13:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.574 18:13:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.574 18:13:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
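The locks_exist helper exercised in default_locks boils down to one pipeline: list the POSIX locks the target holds and look for the spdk_cpu_lock prefix. A minimal sketch, assuming util-linux lslocks and this run's pid (the "lslocks: write error" in the trace is lslocks hitting a closed pipe once grep -q matches, not a test failure):

  pid=234145   # pid from this run; any spdk_tgt started with -m 0x1 behaves the same
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "core lock held"   # the target claimed /var/tmp/spdk_cpu_lock_000 for core 0
  fi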
00:05:12.574 18:13:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.574 18:13:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.834 [2024-10-08 18:13:05.897137] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:05:12.834 [2024-10-08 18:13:05.897182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234452 ] 00:05:12.834 [2024-10-08 18:13:05.965092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.834 [2024-10-08 18:13:06.041509] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.410 18:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.410 18:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:13.410 18:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:13.410 18:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.410 18:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 234452 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 234452 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 234452 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 234452 ']' 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 234452 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.670 18:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 234452 00:05:13.929 18:13:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.929 18:13:07 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.929 18:13:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 234452' 00:05:13.929 killing process with pid 234452 00:05:13.929 18:13:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 234452 00:05:13.929 18:13:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 234452 00:05:14.189 00:05:14.189 real 0m1.504s 00:05:14.189 user 0m1.603s 00:05:14.189 sys 0m0.482s 00:05:14.189 18:13:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.189 18:13:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.189 ************************************ 00:05:14.189 END TEST default_locks_via_rpc 00:05:14.189 ************************************ 00:05:14.189 18:13:07 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:14.189 18:13:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.189 18:13:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.189 18:13:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.189 ************************************ 00:05:14.189 START TEST non_locking_app_on_locked_coremask 00:05:14.189 ************************************ 00:05:14.189 18:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:14.189 18:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=234835 00:05:14.189 18:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 234835 /var/tmp/spdk.sock 00:05:14.189 18:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.189 18:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 234835 ']' 00:05:14.189 18:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.189 18:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.189 18:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.190 18:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.190 18:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.190 [2024-10-08 18:13:07.472469] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
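The framework_disable_cpumask_locks / framework_enable_cpumask_locks pair traced in default_locks_via_rpc shows that lock acquisition can be flipped on a live target, not only at startup via --disable-cpumask-locks. A sketch against the default /var/tmp/spdk.sock (the pidof lookup is illustrative):

  ./scripts/rpc.py framework_disable_cpumask_locks      # releases the held spdk_cpu_lock_* locks
  ./scripts/rpc.py framework_enable_cpumask_locks       # re-acquires one lock per core in the mask
  lslocks -p "$(pidof spdk_tgt)" | grep spdk_cpu_lock   # locks_exist then sees the lock again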
00:05:14.190 [2024-10-08 18:13:07.472513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234835 ] 00:05:14.449 [2024-10-08 18:13:07.539140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.449 [2024-10-08 18:13:07.618944] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.017 18:13:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.017 18:13:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:15.017 18:13:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:15.017 18:13:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=234901 00:05:15.017 18:13:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 234901 /var/tmp/spdk2.sock 00:05:15.017 18:13:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 234901 ']' 00:05:15.017 18:13:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.017 18:13:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.017 18:13:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.017 18:13:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.017 18:13:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.017 [2024-10-08 18:13:08.328792] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:05:15.017 [2024-10-08 18:13:08.328839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234901 ] 00:05:15.275 [2024-10-08 18:13:08.398054] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
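What this pairing sets up: the first target (pid 234835) holds the core-0 lock, and the "CPU core locks deactivated" notice above confirms the second (pid 234901) opts out, so both can run on the same core. Condensed to its two launches, flags as traced:

  ./build/bin/spdk_tgt -m 0x1 &                  # claims /var/tmp/spdk_cpu_lock_000
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
      -r /var/tmp/spdk2.sock &                   # shares core 0, takes no lock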
00:05:15.275 [2024-10-08 18:13:08.398075] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.275 [2024-10-08 18:13:08.542041] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.213 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.213 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:16.213 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 234835 00:05:16.213 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.213 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 234835 00:05:16.472 lslocks: write error 00:05:16.472 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 234835 00:05:16.472 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 234835 ']' 00:05:16.472 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 234835 00:05:16.472 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:16.731 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.731 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 234835 00:05:16.731 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.731 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.731 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 234835' 00:05:16.731 killing process with pid 234835 00:05:16.731 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 234835 00:05:16.731 18:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 234835 00:05:17.300 18:13:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 234901 00:05:17.300 18:13:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 234901 ']' 00:05:17.300 18:13:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 234901 00:05:17.300 18:13:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:17.300 18:13:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:17.300 18:13:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 234901 00:05:17.300 18:13:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:17.300 18:13:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:17.300 18:13:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 234901' 00:05:17.300 killing 
process with pid 234901 00:05:17.300 18:13:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 234901 00:05:17.300 18:13:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 234901 00:05:17.560 00:05:17.560 real 0m3.441s 00:05:17.560 user 0m3.740s 00:05:17.560 sys 0m0.953s 00:05:17.560 18:13:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.560 18:13:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.560 ************************************ 00:05:17.560 END TEST non_locking_app_on_locked_coremask 00:05:17.560 ************************************ 00:05:17.820 18:13:10 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:17.820 18:13:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.820 18:13:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.820 18:13:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.820 ************************************ 00:05:17.820 START TEST locking_app_on_unlocked_coremask 00:05:17.820 ************************************ 00:05:17.820 18:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:17.820 18:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:17.820 18:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=235387 00:05:17.820 18:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 235387 /var/tmp/spdk.sock 00:05:17.820 18:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 235387 ']' 00:05:17.820 18:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.820 18:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.820 18:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.820 18:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.820 18:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.820 [2024-10-08 18:13:10.971677] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:05:17.820 [2024-10-08 18:13:10.971715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235387 ] 00:05:17.820 [2024-10-08 18:13:11.039644] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:17.820 [2024-10-08 18:13:11.039669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.820 [2024-10-08 18:13:11.118913] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.759 18:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.759 18:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:18.759 18:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=235619 00:05:18.759 18:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 235619 /var/tmp/spdk2.sock 00:05:18.759 18:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:18.759 18:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 235619 ']' 00:05:18.759 18:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.759 18:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.759 18:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.760 18:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.760 18:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.760 [2024-10-08 18:13:11.855653] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
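locking_app_on_unlocked_coremask inverts the previous test: here the first target starts lock-free (--disable-cpumask-locks), so the second, unmodified instance launched above is the one that acquires the core-0 lock, and locks_exist is pointed at its pid (235619). As a sketch:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # takes no lock
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # this one claims core 0
  lslocks -p 235619 | grep spdk_cpu_lock                  # pid from this run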
00:05:18.760 [2024-10-08 18:13:11.855704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235619 ] 00:05:18.760 [2024-10-08 18:13:11.927581] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.760 [2024-10-08 18:13:12.071402] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.698 18:13:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.698 18:13:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:19.698 18:13:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 235619 00:05:19.698 18:13:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.698 18:13:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 235619 00:05:19.957 lslocks: write error 00:05:19.957 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 235387 00:05:19.957 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 235387 ']' 00:05:19.957 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 235387 00:05:19.957 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:19.957 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.957 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 235387 00:05:19.957 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.957 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.957 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 235387' 00:05:19.957 killing process with pid 235387 00:05:19.957 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 235387 00:05:19.957 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 235387 00:05:20.528 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 235619 00:05:20.528 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 235619 ']' 00:05:20.528 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 235619 00:05:20.528 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:20.528 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.528 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 235619 00:05:20.528 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.528 18:13:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.528 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 235619' 00:05:20.528 killing process with pid 235619 00:05:20.528 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 235619 00:05:20.528 18:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 235619 00:05:21.097 00:05:21.097 real 0m3.216s 00:05:21.097 user 0m3.478s 00:05:21.097 sys 0m0.909s 00:05:21.097 18:13:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.097 18:13:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.097 ************************************ 00:05:21.097 END TEST locking_app_on_unlocked_coremask 00:05:21.097 ************************************ 00:05:21.097 18:13:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:21.097 18:13:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.097 18:13:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.097 18:13:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.097 ************************************ 00:05:21.097 START TEST locking_app_on_locked_coremask 00:05:21.097 ************************************ 00:05:21.097 18:13:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:21.097 18:13:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=236014 00:05:21.097 18:13:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 236014 /var/tmp/spdk.sock 00:05:21.097 18:13:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.097 18:13:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 236014 ']' 00:05:21.097 18:13:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.097 18:13:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.097 18:13:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.097 18:13:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.097 18:13:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.097 [2024-10-08 18:13:14.264894] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:05:21.097 [2024-10-08 18:13:14.264937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236014 ] 00:05:21.097 [2024-10-08 18:13:14.334431] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.097 [2024-10-08 18:13:14.414786] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.036 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.036 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=236125 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 236125 /var/tmp/spdk2.sock 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 236125 /var/tmp/spdk2.sock 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 236125 /var/tmp/spdk2.sock 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 236125 ']' 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.037 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.037 [2024-10-08 18:13:15.119556] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:05:22.037 [2024-10-08 18:13:15.119605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236125 ] 00:05:22.037 [2024-10-08 18:13:15.192433] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 236014 has claimed it. 00:05:22.037 [2024-10-08 18:13:15.192470] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:22.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (236125) - No such process 00:05:22.607 ERROR: process (pid: 236125) is no longer running 00:05:22.607 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.607 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:22.607 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:22.607 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:22.607 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:22.607 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:22.607 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 236014 00:05:22.607 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 236014 00:05:22.607 18:13:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.179 lslocks: write error 00:05:23.179 18:13:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 236014 00:05:23.179 18:13:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 236014 ']' 00:05:23.179 18:13:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 236014 00:05:23.179 18:13:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:23.179 18:13:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.179 18:13:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 236014 00:05:23.179 18:13:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:23.179 18:13:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:23.179 18:13:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 236014' 00:05:23.179 killing process with pid 236014 00:05:23.179 18:13:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 236014 00:05:23.179 18:13:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 236014 00:05:23.439 00:05:23.439 real 0m2.423s 00:05:23.439 user 0m2.691s 00:05:23.439 sys 0m0.663s 00:05:23.439 18:13:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.439 
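The claim_cpu_cores error above is the expected outcome of locking_app_on_locked_coremask: with locking active on both sides, the second target cannot start on a core the first has claimed, it exits with "Unable to acquire lock on assigned core mask", and NOT waitforlisten asserts that failure. Reduced to its essence (exit-status details are the harness's concern):

  ./build/bin/spdk_tgt -m 0x1 &                        # pid 236014 claims core 0
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # aborts: cannot lock core 0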
18:13:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.439 ************************************ 00:05:23.439 END TEST locking_app_on_locked_coremask 00:05:23.439 ************************************ 00:05:23.439 18:13:16 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:23.439 18:13:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.439 18:13:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.439 18:13:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.439 ************************************ 00:05:23.439 START TEST locking_overlapped_coremask 00:05:23.439 ************************************ 00:05:23.439 18:13:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:23.439 18:13:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=236391 00:05:23.439 18:13:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 236391 /var/tmp/spdk.sock 00:05:23.439 18:13:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:23.439 18:13:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 236391 ']' 00:05:23.439 18:13:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.439 18:13:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.439 18:13:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.439 18:13:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.439 18:13:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.439 [2024-10-08 18:13:16.760712] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:05:23.439 [2024-10-08 18:13:16.760756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236391 ] 00:05:23.699 [2024-10-08 18:13:16.829973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.699 [2024-10-08 18:13:16.910784] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.699 [2024-10-08 18:13:16.910889] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.699 [2024-10-08 18:13:16.910889] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.638 18:13:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.638 18:13:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:24.638 18:13:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=236622 00:05:24.639 18:13:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:24.639 18:13:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 236622 /var/tmp/spdk2.sock 00:05:24.639 18:13:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:24.639 18:13:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 236622 /var/tmp/spdk2.sock 00:05:24.639 18:13:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:24.639 18:13:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.639 18:13:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:24.639 18:13:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.639 18:13:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 236622 /var/tmp/spdk2.sock 00:05:24.639 18:13:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 236622 ']' 00:05:24.639 18:13:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.639 18:13:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.639 18:13:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.639 18:13:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.639 18:13:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.639 [2024-10-08 18:13:17.643572] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:05:24.639 [2024-10-08 18:13:17.643621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236622 ] 00:05:24.639 [2024-10-08 18:13:17.719559] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 236391 has claimed it. 00:05:24.639 [2024-10-08 18:13:17.719594] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:25.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (236622) - No such process 00:05:25.207 ERROR: process (pid: 236622) is no longer running 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 236391 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 236391 ']' 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 236391 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 236391 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 236391' 00:05:25.207 killing process with pid 236391 00:05:25.207 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 236391 00:05:25.207 18:13:18 
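The two masks in this test overlap on exactly one core, which is why the error above names core 2; a one-liner makes the collision explicit, and check_remaining_locks then asserts that the surviving target still holds the three lock files:

  printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2: the contested core
  ls /var/tmp/spdk_cpu_lock_*         # expected after the failed start: _000 _001 _002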
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 236391 00:05:25.466 00:05:25.466 real 0m1.960s 00:05:25.466 user 0m5.536s 00:05:25.466 sys 0m0.433s 00:05:25.466 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.466 18:13:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.466 ************************************ 00:05:25.466 END TEST locking_overlapped_coremask 00:05:25.466 ************************************ 00:05:25.466 18:13:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:25.466 18:13:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.466 18:13:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.466 18:13:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.466 ************************************ 00:05:25.466 START TEST locking_overlapped_coremask_via_rpc 00:05:25.466 ************************************ 00:05:25.466 18:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:25.466 18:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=236880 00:05:25.466 18:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 236880 /var/tmp/spdk.sock 00:05:25.466 18:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:25.466 18:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 236880 ']' 00:05:25.466 18:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.466 18:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.466 18:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.466 18:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.466 18:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.725 [2024-10-08 18:13:18.788494] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:05:25.725 [2024-10-08 18:13:18.788537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236880 ] 00:05:25.725 [2024-10-08 18:13:18.838810] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:25.725 [2024-10-08 18:13:18.838835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.725 [2024-10-08 18:13:18.922394] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.725 [2024-10-08 18:13:18.922428] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.725 [2024-10-08 18:13:18.922429] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.985 18:13:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.985 18:13:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:25.985 18:13:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=236886 00:05:25.985 18:13:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 236886 /var/tmp/spdk2.sock 00:05:25.985 18:13:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:25.985 18:13:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 236886 ']' 00:05:25.985 18:13:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.985 18:13:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.985 18:13:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.985 18:13:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.985 18:13:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.985 [2024-10-08 18:13:19.192712] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:05:25.985 [2024-10-08 18:13:19.192755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236886 ] 00:05:25.985 [2024-10-08 18:13:19.267477] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
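The overlap driving this test is deliberate: the first target runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so core 2 is contested, and each claim is backed by a plain lock file under /var/tmp. A minimal bash sketch of the overlap and of the check_remaining_locks comparison traced earlier (lock-file names assumed to match the three cores actually claimed here):
printf 'shared core mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2
# check_remaining_locks, as traced above: the lock files present must be
# exactly the set expected for the claimed cores.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ ${locks[*]} == "${locks_expected[*]}" ]]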
00:05:25.985 [2024-10-08 18:13:19.267508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.245 [2024-10-08 18:13:19.418392] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.245 [2024-10-08 18:13:19.418477] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.245 [2024-10-08 18:13:19.418479] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.813 [2024-10-08 18:13:20.051456] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 236880 has claimed it. 
00:05:26.813 request: 00:05:26.813 { 00:05:26.813 "method": "framework_enable_cpumask_locks", 00:05:26.813 "req_id": 1 00:05:26.813 } 00:05:26.813 Got JSON-RPC error response 00:05:26.813 response: 00:05:26.813 { 00:05:26.813 "code": -32603, 00:05:26.813 "message": "Failed to claim CPU core: 2" 00:05:26.813 } 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 236880 /var/tmp/spdk.sock 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 236880 ']' 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.813 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.072 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.072 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:27.072 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 236886 /var/tmp/spdk2.sock 00:05:27.073 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 236886 ']' 00:05:27.073 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.073 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.073 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
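For reference, the RPC rejected above can be driven by hand; a sketch using the same rpc.py and second socket as this run (the -32603 response is expected for as long as the first target still holds core 2):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks ||
  echo "expected failure: -32603 'Failed to claim CPU core: 2'"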
00:05:27.073 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.073 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.332 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.332 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:27.332 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:27.332 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:27.332 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:27.332 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:27.332 00:05:27.332 real 0m1.738s 00:05:27.332 user 0m0.862s 00:05:27.332 sys 0m0.146s 00:05:27.332 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.332 18:13:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.332 ************************************ 00:05:27.332 END TEST locking_overlapped_coremask_via_rpc 00:05:27.332 ************************************ 00:05:27.332 18:13:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:27.332 18:13:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 236880 ]] 00:05:27.332 18:13:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 236880 00:05:27.332 18:13:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 236880 ']' 00:05:27.332 18:13:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 236880 00:05:27.332 18:13:20 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:27.332 18:13:20 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.332 18:13:20 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 236880 00:05:27.332 18:13:20 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.332 18:13:20 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.332 18:13:20 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 236880' 00:05:27.332 killing process with pid 236880 00:05:27.332 18:13:20 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 236880 00:05:27.332 18:13:20 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 236880 00:05:27.591 18:13:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 236886 ]] 00:05:27.591 18:13:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 236886 00:05:27.591 18:13:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 236886 ']' 00:05:27.591 18:13:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 236886 00:05:27.591 18:13:20 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:27.591 18:13:20 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:27.591 18:13:20 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 236886 00:05:27.851 18:13:20 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:27.851 18:13:20 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:27.851 18:13:20 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 236886' 00:05:27.851 killing process with pid 236886 00:05:27.851 18:13:20 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 236886 00:05:27.851 18:13:20 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 236886 00:05:28.111 18:13:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:28.111 18:13:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:28.111 18:13:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 236880 ]] 00:05:28.111 18:13:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 236880 00:05:28.111 18:13:21 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 236880 ']' 00:05:28.111 18:13:21 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 236880 00:05:28.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (236880) - No such process 00:05:28.111 18:13:21 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 236880 is not found' 00:05:28.111 Process with pid 236880 is not found 00:05:28.111 18:13:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 236886 ]] 00:05:28.111 18:13:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 236886 00:05:28.111 18:13:21 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 236886 ']' 00:05:28.111 18:13:21 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 236886 00:05:28.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (236886) - No such process 00:05:28.111 18:13:21 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 236886 is not found' 00:05:28.111 Process with pid 236886 is not found 00:05:28.111 18:13:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:28.111 00:05:28.111 real 0m17.123s 00:05:28.111 user 0m29.211s 00:05:28.111 sys 0m5.029s 00:05:28.111 18:13:21 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.111 18:13:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.111 ************************************ 00:05:28.111 END TEST cpu_locks 00:05:28.111 ************************************ 00:05:28.111 00:05:28.111 real 0m43.074s 00:05:28.111 user 1m22.443s 00:05:28.111 sys 0m8.648s 00:05:28.111 18:13:21 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.111 18:13:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.111 ************************************ 00:05:28.111 END TEST event 00:05:28.111 ************************************ 00:05:28.111 18:13:21 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:28.111 18:13:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.111 18:13:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.111 18:13:21 -- common/autotest_common.sh@10 -- # set +x 00:05:28.111 ************************************ 00:05:28.111 START TEST thread 00:05:28.111 ************************************ 00:05:28.111 18:13:21 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:28.378 * Looking for test storage... 00:05:28.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:28.379 18:13:21 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:28.379 18:13:21 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:28.379 18:13:21 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:28.379 18:13:21 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:28.379 18:13:21 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.379 18:13:21 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.379 18:13:21 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.379 18:13:21 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.379 18:13:21 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.379 18:13:21 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.379 18:13:21 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.379 18:13:21 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.379 18:13:21 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.379 18:13:21 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.379 18:13:21 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.379 18:13:21 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:28.379 18:13:21 thread -- scripts/common.sh@345 -- # : 1 00:05:28.379 18:13:21 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.379 18:13:21 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.379 18:13:21 thread -- scripts/common.sh@365 -- # decimal 1 00:05:28.379 18:13:21 thread -- scripts/common.sh@353 -- # local d=1 00:05:28.379 18:13:21 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.379 18:13:21 thread -- scripts/common.sh@355 -- # echo 1 00:05:28.379 18:13:21 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.379 18:13:21 thread -- scripts/common.sh@366 -- # decimal 2 00:05:28.379 18:13:21 thread -- scripts/common.sh@353 -- # local d=2 00:05:28.379 18:13:21 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.379 18:13:21 thread -- scripts/common.sh@355 -- # echo 2 00:05:28.379 18:13:21 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.379 18:13:21 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.379 18:13:21 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.379 18:13:21 thread -- scripts/common.sh@368 -- # return 0 00:05:28.379 18:13:21 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.379 18:13:21 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:28.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.379 --rc genhtml_branch_coverage=1 00:05:28.379 --rc genhtml_function_coverage=1 00:05:28.379 --rc genhtml_legend=1 00:05:28.379 --rc geninfo_all_blocks=1 00:05:28.379 --rc geninfo_unexecuted_blocks=1 00:05:28.379 00:05:28.379 ' 00:05:28.379 18:13:21 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:28.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.379 --rc genhtml_branch_coverage=1 00:05:28.379 --rc genhtml_function_coverage=1 00:05:28.379 --rc genhtml_legend=1 00:05:28.379 --rc geninfo_all_blocks=1 00:05:28.379 --rc geninfo_unexecuted_blocks=1 00:05:28.379 00:05:28.379 ' 00:05:28.379 18:13:21 thread 
-- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:28.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.379 --rc genhtml_branch_coverage=1 00:05:28.379 --rc genhtml_function_coverage=1 00:05:28.379 --rc genhtml_legend=1 00:05:28.379 --rc geninfo_all_blocks=1 00:05:28.379 --rc geninfo_unexecuted_blocks=1 00:05:28.379 00:05:28.379 ' 00:05:28.379 18:13:21 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:28.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.379 --rc genhtml_branch_coverage=1 00:05:28.379 --rc genhtml_function_coverage=1 00:05:28.379 --rc genhtml_legend=1 00:05:28.379 --rc geninfo_all_blocks=1 00:05:28.379 --rc geninfo_unexecuted_blocks=1 00:05:28.379 00:05:28.379 ' 00:05:28.379 18:13:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:28.379 18:13:21 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:28.379 18:13:21 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.379 18:13:21 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.379 ************************************ 00:05:28.379 START TEST thread_poller_perf 00:05:28.379 ************************************ 00:05:28.379 18:13:21 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:28.379 [2024-10-08 18:13:21.630828] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:05:28.379 [2024-10-08 18:13:21.630897] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237451 ] 00:05:28.379 [2024-10-08 18:13:21.700253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.639 [2024-10-08 18:13:21.771015] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.639 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:29.578 [2024-10-08T16:13:22.901Z] ====================================== 00:05:29.578 [2024-10-08T16:13:22.901Z] busy:2105112322 (cyc) 00:05:29.578 [2024-10-08T16:13:22.901Z] total_run_count: 418000 00:05:29.578 [2024-10-08T16:13:22.901Z] tsc_hz: 2100000000 (cyc) 00:05:29.578 [2024-10-08T16:13:22.901Z] ====================================== 00:05:29.578 [2024-10-08T16:13:22.901Z] poller_cost: 5036 (cyc), 2398 (nsec) 00:05:29.578 00:05:29.578 real 0m1.235s 00:05:29.578 user 0m1.153s 00:05:29.578 sys 0m0.078s 00:05:29.578 18:13:22 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.578 18:13:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.578 ************************************ 00:05:29.578 END TEST thread_poller_perf 00:05:29.578 ************************************ 00:05:29.578 18:13:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:29.578 18:13:22 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:29.578 18:13:22 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.578 18:13:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.838 ************************************ 00:05:29.838 START TEST thread_poller_perf 00:05:29.838 ************************************ 00:05:29.838 18:13:22 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:29.838 [2024-10-08 18:13:22.936442] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:05:29.838 [2024-10-08 18:13:22.936511] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237698 ] 00:05:29.838 [2024-10-08 18:13:23.006900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.838 [2024-10-08 18:13:23.079480] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.838 Running 1000 pollers for 1 seconds with 0 microseconds period. 
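The poller_cost figures in these tables follow directly from the other rows; a sketch of the arithmetic using the first run's numbers (values copied from the table above):
busy=2105112322 runs=418000 tsc_hz=2100000000
echo "poller_cost cyc:  $(( busy / runs ))"                        # -> 5036
echo "poller_cost nsec: $(( busy * 1000000000 / tsc_hz / runs ))"  # -> 2398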
00:05:31.218 [2024-10-08T16:13:24.541Z] ====================================== 00:05:31.218 [2024-10-08T16:13:24.541Z] busy:2101650906 (cyc) 00:05:31.218 [2024-10-08T16:13:24.541Z] total_run_count: 5545000 00:05:31.218 [2024-10-08T16:13:24.541Z] tsc_hz: 2100000000 (cyc) 00:05:31.218 [2024-10-08T16:13:24.541Z] ====================================== 00:05:31.218 [2024-10-08T16:13:24.541Z] poller_cost: 379 (cyc), 180 (nsec) 00:05:31.218 00:05:31.218 real 0m1.234s 00:05:31.218 user 0m1.144s 00:05:31.218 sys 0m0.085s 00:05:31.218 18:13:24 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.218 18:13:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.218 ************************************ 00:05:31.218 END TEST thread_poller_perf 00:05:31.218 ************************************ 00:05:31.218 18:13:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:31.218 00:05:31.218 real 0m2.781s 00:05:31.218 user 0m2.442s 00:05:31.218 sys 0m0.352s 00:05:31.218 18:13:24 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.218 18:13:24 thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.218 ************************************ 00:05:31.218 END TEST thread 00:05:31.218 ************************************ 00:05:31.218 18:13:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:31.218 18:13:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:31.218 18:13:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.218 18:13:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.218 18:13:24 -- common/autotest_common.sh@10 -- # set +x 00:05:31.218 ************************************ 00:05:31.218 START TEST app_cmdline 00:05:31.218 ************************************ 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:31.218 * Looking for test storage... 
00:05:31.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.218 18:13:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:31.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.218 --rc genhtml_branch_coverage=1 00:05:31.218 --rc genhtml_function_coverage=1 00:05:31.218 --rc genhtml_legend=1 00:05:31.218 --rc geninfo_all_blocks=1 00:05:31.218 --rc geninfo_unexecuted_blocks=1 00:05:31.218 00:05:31.218 ' 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:31.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.218 --rc genhtml_branch_coverage=1 00:05:31.218 --rc genhtml_function_coverage=1 00:05:31.218 --rc genhtml_legend=1 00:05:31.218 --rc geninfo_all_blocks=1 00:05:31.218 --rc geninfo_unexecuted_blocks=1 
00:05:31.218 00:05:31.218 ' 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:31.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.218 --rc genhtml_branch_coverage=1 00:05:31.218 --rc genhtml_function_coverage=1 00:05:31.218 --rc genhtml_legend=1 00:05:31.218 --rc geninfo_all_blocks=1 00:05:31.218 --rc geninfo_unexecuted_blocks=1 00:05:31.218 00:05:31.218 ' 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:31.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.218 --rc genhtml_branch_coverage=1 00:05:31.218 --rc genhtml_function_coverage=1 00:05:31.218 --rc genhtml_legend=1 00:05:31.218 --rc geninfo_all_blocks=1 00:05:31.218 --rc geninfo_unexecuted_blocks=1 00:05:31.218 00:05:31.218 ' 00:05:31.218 18:13:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:31.218 18:13:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=238000 00:05:31.218 18:13:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 238000 00:05:31.218 18:13:24 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 238000 ']' 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.218 18:13:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:31.218 [2024-10-08 18:13:24.484369] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:05:31.218 [2024-10-08 18:13:24.484427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid238000 ] 00:05:31.478 [2024-10-08 18:13:24.553449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.478 [2024-10-08 18:13:24.624753] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.046 18:13:25 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.046 18:13:25 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:32.046 18:13:25 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:32.306 { 00:05:32.306 "version": "SPDK v25.01-pre git sha1 ba5b39cb2", 00:05:32.306 "fields": { 00:05:32.306 "major": 25, 00:05:32.306 "minor": 1, 00:05:32.306 "patch": 0, 00:05:32.306 "suffix": "-pre", 00:05:32.306 "commit": "ba5b39cb2" 00:05:32.306 } 00:05:32.306 } 00:05:32.306 18:13:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:32.306 18:13:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:32.306 18:13:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:32.306 18:13:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:32.306 18:13:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:32.306 18:13:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:32.306 18:13:25 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.306 18:13:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:32.306 18:13:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:32.306 18:13:25 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.306 18:13:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:32.306 18:13:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:32.306 18:13:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:32.306 18:13:25 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:32.306 18:13:25 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:32.306 18:13:25 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:32.306 18:13:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.306 18:13:25 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:32.306 18:13:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.307 18:13:25 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:32.307 18:13:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.307 18:13:25 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:32.307 18:13:25 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:32.307 18:13:25 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:32.580 request: 00:05:32.580 { 00:05:32.580 "method": "env_dpdk_get_mem_stats", 00:05:32.580 "req_id": 1 00:05:32.580 } 00:05:32.580 Got JSON-RPC error response 00:05:32.580 response: 00:05:32.580 { 00:05:32.580 "code": -32601, 00:05:32.580 "message": "Method not found" 00:05:32.580 } 00:05:32.580 18:13:25 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:32.580 18:13:25 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:32.580 18:13:25 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:32.580 18:13:25 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:32.580 18:13:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 238000 00:05:32.580 18:13:25 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 238000 ']' 00:05:32.580 18:13:25 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 238000 00:05:32.580 18:13:25 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:32.580 18:13:25 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.580 18:13:25 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 238000 00:05:32.580 18:13:25 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.580 18:13:25 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.580 18:13:25 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 238000' 00:05:32.580 killing process with pid 238000 00:05:32.580 18:13:25 app_cmdline -- common/autotest_common.sh@969 -- # kill 238000 00:05:32.580 18:13:25 app_cmdline -- common/autotest_common.sh@974 -- # wait 238000 00:05:32.853 00:05:32.853 real 0m1.890s 00:05:32.853 user 0m2.269s 00:05:32.853 sys 0m0.486s 00:05:32.853 18:13:26 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.853 18:13:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:32.853 ************************************ 00:05:32.853 END TEST app_cmdline 00:05:32.853 ************************************ 00:05:33.122 18:13:26 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:33.122 18:13:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.122 18:13:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.122 18:13:26 -- common/autotest_common.sh@10 -- # set +x 00:05:33.122 ************************************ 00:05:33.122 START TEST version 00:05:33.122 ************************************ 00:05:33.122 18:13:26 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:33.122 * Looking for test storage... 
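The -32601 above is the allowlist at work: this spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method is refused before dispatch. A sketch of the same probe against the default socket used in this run:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc rpc_get_methods | jq -r '.[]' | sort    # -> rpc_get_methods, spdk_get_version
$rpc env_dpdk_get_mem_stats                  # -> -32601 "Method not found"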
00:05:33.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:33.122 18:13:26 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:33.122 18:13:26 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:33.122 18:13:26 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:33.122 18:13:26 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:33.122 18:13:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.122 18:13:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.122 18:13:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.122 18:13:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.122 18:13:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.122 18:13:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.122 18:13:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.122 18:13:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.122 18:13:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.122 18:13:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.122 18:13:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.122 18:13:26 version -- scripts/common.sh@344 -- # case "$op" in 00:05:33.122 18:13:26 version -- scripts/common.sh@345 -- # : 1 00:05:33.122 18:13:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.122 18:13:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.122 18:13:26 version -- scripts/common.sh@365 -- # decimal 1 00:05:33.122 18:13:26 version -- scripts/common.sh@353 -- # local d=1 00:05:33.122 18:13:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.122 18:13:26 version -- scripts/common.sh@355 -- # echo 1 00:05:33.122 18:13:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.122 18:13:26 version -- scripts/common.sh@366 -- # decimal 2 00:05:33.122 18:13:26 version -- scripts/common.sh@353 -- # local d=2 00:05:33.122 18:13:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.122 18:13:26 version -- scripts/common.sh@355 -- # echo 2 00:05:33.122 18:13:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.122 18:13:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.122 18:13:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.122 18:13:26 version -- scripts/common.sh@368 -- # return 0 00:05:33.122 18:13:26 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.122 18:13:26 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:33.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.122 --rc genhtml_branch_coverage=1 00:05:33.122 --rc genhtml_function_coverage=1 00:05:33.122 --rc genhtml_legend=1 00:05:33.122 --rc geninfo_all_blocks=1 00:05:33.122 --rc geninfo_unexecuted_blocks=1 00:05:33.122 00:05:33.122 ' 00:05:33.122 18:13:26 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:33.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.122 --rc genhtml_branch_coverage=1 00:05:33.122 --rc genhtml_function_coverage=1 00:05:33.122 --rc genhtml_legend=1 00:05:33.122 --rc geninfo_all_blocks=1 00:05:33.122 --rc geninfo_unexecuted_blocks=1 00:05:33.122 00:05:33.122 ' 00:05:33.122 18:13:26 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:33.122 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.122 --rc genhtml_branch_coverage=1 00:05:33.122 --rc genhtml_function_coverage=1 00:05:33.122 --rc genhtml_legend=1 00:05:33.122 --rc geninfo_all_blocks=1 00:05:33.122 --rc geninfo_unexecuted_blocks=1 00:05:33.122 00:05:33.122 ' 00:05:33.122 18:13:26 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:33.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.122 --rc genhtml_branch_coverage=1 00:05:33.122 --rc genhtml_function_coverage=1 00:05:33.122 --rc genhtml_legend=1 00:05:33.122 --rc geninfo_all_blocks=1 00:05:33.122 --rc geninfo_unexecuted_blocks=1 00:05:33.122 00:05:33.122 ' 00:05:33.122 18:13:26 version -- app/version.sh@17 -- # get_header_version major 00:05:33.123 18:13:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:33.123 18:13:26 version -- app/version.sh@14 -- # cut -f2 00:05:33.123 18:13:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:33.123 18:13:26 version -- app/version.sh@17 -- # major=25 00:05:33.123 18:13:26 version -- app/version.sh@18 -- # get_header_version minor 00:05:33.123 18:13:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:33.123 18:13:26 version -- app/version.sh@14 -- # cut -f2 00:05:33.123 18:13:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:33.123 18:13:26 version -- app/version.sh@18 -- # minor=1 00:05:33.123 18:13:26 version -- app/version.sh@19 -- # get_header_version patch 00:05:33.123 18:13:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:33.123 18:13:26 version -- app/version.sh@14 -- # cut -f2 00:05:33.123 18:13:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:33.123 18:13:26 version -- app/version.sh@19 -- # patch=0 00:05:33.123 18:13:26 version -- app/version.sh@20 -- # get_header_version suffix 00:05:33.123 18:13:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:33.123 18:13:26 version -- app/version.sh@14 -- # cut -f2 00:05:33.123 18:13:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:33.123 18:13:26 version -- app/version.sh@20 -- # suffix=-pre 00:05:33.123 18:13:26 version -- app/version.sh@22 -- # version=25.1 00:05:33.123 18:13:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:33.123 18:13:26 version -- app/version.sh@28 -- # version=25.1rc0 00:05:33.123 18:13:26 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:33.123 18:13:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:33.391 18:13:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:33.391 18:13:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:33.391 00:05:33.391 real 0m0.232s 00:05:33.391 user 0m0.143s 00:05:33.391 sys 0m0.133s 00:05:33.391 18:13:26 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.391 
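get_header_version, traced above for each field, is just grep/cut/tr over include/spdk/version.h; a sketch (the cut -f2 relies on the tab-separated #define layout of that header):
hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
get_header_version() {
  grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}
major=$(get_header_version MAJOR)   # -> 25
minor=$(get_header_version MINOR)   # -> 1
patch=$(get_header_version PATCH)   # -> 0
echo "${major}.${minor}"            # -> 25.1; with patch==0 the script tags it 25.1rc0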
18:13:26 version -- common/autotest_common.sh@10 -- # set +x 00:05:33.391 ************************************ 00:05:33.391 END TEST version 00:05:33.391 ************************************ 00:05:33.391 18:13:26 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:33.391 18:13:26 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:33.391 18:13:26 -- spdk/autotest.sh@194 -- # uname -s 00:05:33.391 18:13:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:33.391 18:13:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:33.391 18:13:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:33.391 18:13:26 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:33.391 18:13:26 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:33.391 18:13:26 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:33.391 18:13:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:33.391 18:13:26 -- common/autotest_common.sh@10 -- # set +x 00:05:33.391 18:13:26 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:33.391 18:13:26 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:33.391 18:13:26 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:33.391 18:13:26 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:33.391 18:13:26 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:33.391 18:13:26 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:33.392 18:13:26 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:33.392 18:13:26 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:33.392 18:13:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.392 18:13:26 -- common/autotest_common.sh@10 -- # set +x 00:05:33.392 ************************************ 00:05:33.392 START TEST nvmf_tcp 00:05:33.392 ************************************ 00:05:33.392 18:13:26 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:33.392 * Looking for test storage... 
00:05:33.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:33.392 18:13:26 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:33.392 18:13:26 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:33.392 18:13:26 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:33.651 18:13:26 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.651 18:13:26 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:33.651 18:13:26 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.651 18:13:26 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:33.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.651 --rc genhtml_branch_coverage=1 00:05:33.651 --rc genhtml_function_coverage=1 00:05:33.651 --rc genhtml_legend=1 00:05:33.651 --rc geninfo_all_blocks=1 00:05:33.651 --rc geninfo_unexecuted_blocks=1 00:05:33.651 00:05:33.651 ' 00:05:33.651 18:13:26 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:33.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.651 --rc genhtml_branch_coverage=1 00:05:33.651 --rc genhtml_function_coverage=1 00:05:33.651 --rc genhtml_legend=1 00:05:33.651 --rc geninfo_all_blocks=1 00:05:33.651 --rc geninfo_unexecuted_blocks=1 00:05:33.651 00:05:33.651 ' 00:05:33.651 18:13:26 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:33.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.651 --rc genhtml_branch_coverage=1 00:05:33.651 --rc genhtml_function_coverage=1 00:05:33.651 --rc genhtml_legend=1 00:05:33.651 --rc geninfo_all_blocks=1 00:05:33.651 --rc geninfo_unexecuted_blocks=1 00:05:33.651 00:05:33.651 ' 00:05:33.651 18:13:26 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:33.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.651 --rc genhtml_branch_coverage=1 00:05:33.651 --rc genhtml_function_coverage=1 00:05:33.651 --rc genhtml_legend=1 00:05:33.651 --rc geninfo_all_blocks=1 00:05:33.651 --rc geninfo_unexecuted_blocks=1 00:05:33.651 00:05:33.651 ' 00:05:33.651 18:13:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:33.651 18:13:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:33.652 18:13:26 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:33.652 18:13:26 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:33.652 18:13:26 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.652 18:13:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.652 ************************************ 00:05:33.652 START TEST nvmf_target_core 00:05:33.652 ************************************ 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:33.652 * Looking for test storage... 00:05:33.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:33.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.652 --rc genhtml_branch_coverage=1 00:05:33.652 --rc genhtml_function_coverage=1 00:05:33.652 --rc genhtml_legend=1 00:05:33.652 --rc geninfo_all_blocks=1 00:05:33.652 --rc geninfo_unexecuted_blocks=1 00:05:33.652 00:05:33.652 ' 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:33.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.652 --rc genhtml_branch_coverage=1 00:05:33.652 --rc genhtml_function_coverage=1 00:05:33.652 --rc genhtml_legend=1 00:05:33.652 --rc geninfo_all_blocks=1 00:05:33.652 --rc geninfo_unexecuted_blocks=1 00:05:33.652 00:05:33.652 ' 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:33.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.652 --rc genhtml_branch_coverage=1 00:05:33.652 --rc genhtml_function_coverage=1 00:05:33.652 --rc genhtml_legend=1 00:05:33.652 --rc geninfo_all_blocks=1 00:05:33.652 --rc geninfo_unexecuted_blocks=1 00:05:33.652 00:05:33.652 ' 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:33.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.652 --rc genhtml_branch_coverage=1 00:05:33.652 --rc genhtml_function_coverage=1 00:05:33.652 --rc genhtml_legend=1 00:05:33.652 --rc geninfo_all_blocks=1 00:05:33.652 --rc geninfo_unexecuted_blocks=1 00:05:33.652 00:05:33.652 ' 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.652 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:33.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.913 18:13:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:33.913 
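[editor's note] A real shell bug surfaces here and recurs each time nvmf/common.sh is sourced: line 33 evaluates '[' '' -eq 1 ']'. An unset flag expands to the empty string, test(1) cannot parse "" as an integer, and it prints "[: : integer expression expected" and returns nonzero; the run survives because the branch is simply not taken. A sketch of the conventional fix, with FLAG standing in for whatever variable line 33 actually tests (not visible in this trace):

    [ "$FLAG" -eq 1 ] && enable_feature       # noisy when FLAG is empty or unset
    [ "${FLAG:-0}" -eq 1 ] && enable_feature  # defaulted expansion stays quiet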
************************************ 00:05:33.913 START TEST nvmf_abort 00:05:33.913 ************************************ 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:33.913 * Looking for test storage... 00:05:33.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:33.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.913 --rc genhtml_branch_coverage=1 00:05:33.913 --rc genhtml_function_coverage=1 00:05:33.913 --rc genhtml_legend=1 00:05:33.913 --rc geninfo_all_blocks=1 00:05:33.913 --rc geninfo_unexecuted_blocks=1 00:05:33.913 00:05:33.913 ' 00:05:33.913 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:33.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.913 --rc genhtml_branch_coverage=1 00:05:33.914 --rc genhtml_function_coverage=1 00:05:33.914 --rc genhtml_legend=1 00:05:33.914 --rc geninfo_all_blocks=1 00:05:33.914 --rc geninfo_unexecuted_blocks=1 00:05:33.914 00:05:33.914 ' 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:33.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.914 --rc genhtml_branch_coverage=1 00:05:33.914 --rc genhtml_function_coverage=1 00:05:33.914 --rc genhtml_legend=1 00:05:33.914 --rc geninfo_all_blocks=1 00:05:33.914 --rc geninfo_unexecuted_blocks=1 00:05:33.914 00:05:33.914 ' 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:33.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.914 --rc genhtml_branch_coverage=1 00:05:33.914 --rc genhtml_function_coverage=1 00:05:33.914 --rc genhtml_legend=1 00:05:33.914 --rc geninfo_all_blocks=1 00:05:33.914 --rc geninfo_unexecuted_blocks=1 00:05:33.914 00:05:33.914 ' 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:33.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:33.914 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:34.175 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:34.175 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:34.175 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:34.175 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:40.754 18:13:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:40.754 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:40.754 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:40.754 18:13:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:40.754 Found net devices under 0000:86:00.0: cvl_0_0 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:40.754 Found net devices under 0000:86:00.1: cvl_0_1 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:40.754 18:13:32 
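[editor's note] The device discovery above works off a table of vendor:device IDs (e810 0x1592/0x159b, x722 0x37d2, a list of Mellanox parts), keeps only the family matching the test config (e810 here), and resolves each PCI address to its kernel netdev through sysfs, yielding cvl_0_0 and cvl_0_1 for the two ice ports. A stand-alone sketch of that sysfs lookup, trimmed to the one e810 ID seen in this run:

    # Resolve Intel e810 (8086:159b) PCI functions to their net device names.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] || continue          # no netdev bound to this function
            echo "Found net devices under $pci: ${path##*/}"
        done
    done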
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:40.754 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:40.755 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:40.755 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:40.755 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:40.755 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:40.755 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:40.755 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:40.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:40.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:05:40.755 00:05:40.755 --- 10.0.0.2 ping statistics --- 00:05:40.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:40.755 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:40.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:40.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:05:40.755 00:05:40.755 --- 10.0.0.1 ping statistics --- 00:05:40.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:40.755 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=241693 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 241693 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 241693 ']' 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.755 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:40.755 [2024-10-08 18:13:33.321509] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
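[editor's note] Both E810 ports live in the same box, so nvmf_tcp_init fakes a two-host fabric with a network namespace: the target-side port moves into cvl_0_0_ns_spdk and gets 10.0.0.2/24, the initiator-side port keeps 10.0.0.1/24 in the root namespace, TCP port 4420 is opened with a tagged iptables rule (the SPDK_NVMF comment is what cleanup greps for later), and a ping in each direction proves the path before the target is launched inside the namespace. Condensed from the trace:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                  # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                               # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1           # namespace -> root ns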
00:05:40.755 [2024-10-08 18:13:33.321560] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:40.755 [2024-10-08 18:13:33.395324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:40.755 [2024-10-08 18:13:33.472391] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:40.755 [2024-10-08 18:13:33.472425] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:40.755 [2024-10-08 18:13:33.472433] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:40.755 [2024-10-08 18:13:33.472439] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:40.755 [2024-10-08 18:13:33.472444] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:40.755 [2024-10-08 18:13:33.473420] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.755 [2024-10-08 18:13:33.473452] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.755 [2024-10-08 18:13:33.473453] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.015 [2024-10-08 18:13:34.195944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.015 Malloc0 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.015 Delay0 
00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.015 [2024-10-08 18:13:34.275544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.015 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:41.275 [2024-10-08 18:13:34.403051] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:43.813 Initializing NVMe Controllers 00:05:43.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:43.813 controller IO queue size 128 less than required 00:05:43.813 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:43.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:43.813 Initialization complete. Launching workers. 
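[editor's note] With the target up (nvmf_tgt -m 0xE inside the namespace, three reactors started), abort.sh provisions everything over the RPC socket: a TCP transport, a 64 MiB malloc bdev, a delay bdev stacked on it with roughly one second of injected latency per operation (the slow Delay0 device is what keeps the 128-deep queue full so the abort example has commands in flight to abort), and a subsystem exposing it on 10.0.0.2:4420. The same sequence via scripts/rpc.py, which rpc_cmd in the trace wraps (paths assume the spdk repo root and the default /var/tmp/spdk.sock):

    rpc() { scripts/rpc.py "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB, 4 KiB blocks
    rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000      # avg/p99 read/write latency, us
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420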
00:05:43.813 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37696 00:05:43.813 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37757, failed to submit 62 00:05:43.813 success 37700, unsuccessful 57, failed 0 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:43.813 rmmod nvme_tcp 00:05:43.813 rmmod nvme_fabrics 00:05:43.813 rmmod nvme_keyring 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:43.813 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 241693 ']' 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 241693 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 241693 ']' 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 241693 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 241693 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 241693' 00:05:43.814 killing process with pid 241693 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 241693 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 241693 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
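[editor's note] The workload summary reads as: of the 37,819 I/Os sent to the one-second-delay namespace, only 123 completed normally and 37,696 failed, presumably because their abort won the race; 37,757 aborts were submitted and 37,700 succeeded. nvmftestfini then tears the stack down in reverse. Condensed from the trace:

    rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    sync
    modprobe -v -r nvme-tcp     # -r removes dependents too: rmmod nvme_tcp,
    modprobe -v -r nvme-fabrics #   nvme_fabrics, nvme_keyring per the log
    kill "$nvmfpid"             # killprocess: pid 241693 (reactor_1) here
    wait "$nvmfpid"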
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:43.814 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.725 18:13:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:45.725 00:05:45.725 real 0m11.910s 00:05:45.725 user 0m13.634s 00:05:45.725 sys 0m5.532s 00:05:45.725 18:13:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.725 18:13:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:45.725 ************************************ 00:05:45.725 END TEST nvmf_abort 00:05:45.725 ************************************ 00:05:45.725 18:13:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:45.725 18:13:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:45.725 18:13:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.725 18:13:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:45.725 ************************************ 00:05:45.725 START TEST nvmf_ns_hotplug_stress 00:05:45.725 ************************************ 00:05:45.725 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:45.985 * Looking for test storage... 
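[editor's note] The iptr helper traced at the start of this stretch undoes the earlier firewall edit without disturbing unrelated rules: save the whole ruleset, drop every rule carrying the SPDK_NVMF comment tag added during setup, restore the remainder; namespace teardown and an address flush follow. Condensed (the namespace deletion is assumed to be the body of _remove_spdk_ns, which the trace only names):

    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed _remove_spdk_ns body
    ip -4 addr flush cvl_0_1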
00:05:45.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:45.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.985 --rc genhtml_branch_coverage=1 00:05:45.985 --rc genhtml_function_coverage=1 00:05:45.985 --rc genhtml_legend=1 00:05:45.985 --rc geninfo_all_blocks=1 00:05:45.985 --rc geninfo_unexecuted_blocks=1 00:05:45.985 00:05:45.985 ' 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:45.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.985 --rc genhtml_branch_coverage=1 00:05:45.985 --rc genhtml_function_coverage=1 00:05:45.985 --rc genhtml_legend=1 00:05:45.985 --rc geninfo_all_blocks=1 00:05:45.985 --rc geninfo_unexecuted_blocks=1 00:05:45.985 00:05:45.985 ' 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:45.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.985 --rc genhtml_branch_coverage=1 00:05:45.985 --rc genhtml_function_coverage=1 00:05:45.985 --rc genhtml_legend=1 00:05:45.985 --rc geninfo_all_blocks=1 00:05:45.985 --rc geninfo_unexecuted_blocks=1 00:05:45.985 00:05:45.985 ' 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:45.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.985 --rc genhtml_branch_coverage=1 00:05:45.985 --rc genhtml_function_coverage=1 00:05:45.985 --rc genhtml_legend=1 00:05:45.985 --rc geninfo_all_blocks=1 00:05:45.985 --rc geninfo_unexecuted_blocks=1 00:05:45.985 00:05:45.985 ' 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.985 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:45.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:45.986 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:52.565 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.565 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.565 
18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:52.566 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:52.566 Found net devices under 0000:86:00.0: cvl_0_0 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:52.566 Found net devices under 0000:86:00.1: cvl_0_1 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:52.566 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:52.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:52.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:05:52.566 00:05:52.566 --- 10.0.0.2 ping statistics --- 00:05:52.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:52.566 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:52.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:52.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:05:52.566 00:05:52.566 --- 10.0.0.1 ping statistics --- 00:05:52.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:52.566 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=245942 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 245942 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
245942 ']' 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.566 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.566 [2024-10-08 18:13:45.301492] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:05:52.566 [2024-10-08 18:13:45.301538] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:52.566 [2024-10-08 18:13:45.373353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.566 [2024-10-08 18:13:45.445587] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:52.567 [2024-10-08 18:13:45.445630] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:52.567 [2024-10-08 18:13:45.445637] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:52.567 [2024-10-08 18:13:45.445643] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:52.567 [2024-10-08 18:13:45.445652] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
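At this point the harness has launched the NVMe-oF target inside the cvl_0_0_ns_spdk network namespace set up earlier (target side 10.0.0.2 on cvl_0_0 in the namespace, initiator side 10.0.0.1 on cvl_0_1 in the root namespace). The -m 0xE argument is a CPU core mask: binary 1110 selects cores 1-3, which matches the "Total cores available: 3" notice above and the three "Reactor started on core 1/2/3" notices that follow, while -e 0xFFFF is the tracepoint group mask the app reports. The earlier "[: : integer expression expected" complaint from common.sh line 33 is an empty string being tested with -eq; it is evidently non-fatal here, since the run proceeds. A minimal sketch of the launch-and-wait sequence this trace records, using the paths and arguments shown above (waitforlisten is the autotest_common.sh helper the trace names, polling until the RPC socket /var/tmp/spdk.sock accepts connections):

    # Start the target in the test namespace; -i 0 is the shared-memory id,
    # -e 0xFFFF enables all tracepoint groups, -m 0xE pins reactors to cores 1-3.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Block until the app is up and listening on /var/tmp/spdk.sock,
    # as the "Waiting for process to start up..." message above describes.
    waitforlisten "$nvmfpid"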
00:05:52.567 [2024-10-08 18:13:45.446663] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.567 [2024-10-08 18:13:45.446693] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.567 [2024-10-08 18:13:45.446695] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.826 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.826 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:52.826 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:52.826 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.826 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:53.085 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:53.085 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:53.085 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:53.085 [2024-10-08 18:13:46.357648] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.085 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:53.345 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:53.605 [2024-10-08 18:13:46.752011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:53.605 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:53.864 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:53.864 Malloc0 00:05:53.865 18:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:54.125 Delay0 00:05:54.125 18:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.384 18:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:54.643 NULL1 00:05:54.643 18:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:54.902 18:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=246426 00:05:54.902 18:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:54.902 18:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:05:54.902 18:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.842 Read completed with error (sct=0, sc=11) 00:05:55.842 18:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.842 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.101 18:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:56.101 18:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:56.360 true 00:05:56.360 18:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:05:56.360 18:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.296 18:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.296 18:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:57.296 18:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:57.556 true 00:05:57.556 18:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:05:57.556 18:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.816 18:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.076 
18:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:58.076 18:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:58.076 true 00:05:58.076 18:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:05:58.076 18:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.455 18:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.455 18:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:59.455 18:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:59.714 true 00:05:59.714 18:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:05:59.714 18:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.652 18:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.652 18:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:00.652 18:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:00.911 true 00:06:00.911 18:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:00.911 18:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.911 18:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.170 18:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:01.170 18:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:01.429 true 00:06:01.429 18:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:01.429 18:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.366 18:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.624 18:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:02.624 18:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:02.883 true 00:06:02.883 18:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:02.883 18:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.819 18:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.819 18:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:03.819 18:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:04.078 true 00:06:04.078 18:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:04.078 18:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.337 18:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.597 18:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:04.597 18:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:04.597 true 00:06:04.597 18:13:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:04.857 18:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.796 18:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.055 18:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:06.055 18:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:06.315 true 00:06:06.315 18:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:06.315 18:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.254 18:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.254 18:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:07.254 18:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:07.513 true 00:06:07.513 18:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:07.513 18:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.772 18:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.772 18:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:07.772 18:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:08.031 true 00:06:08.031 18:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:08.031 18:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.409 18:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.409 18:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:09.409 18:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:09.668 true 00:06:09.668 18:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:09.668 18:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.604 18:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.604 18:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:10.604 18:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:10.862 true 00:06:10.862 18:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:10.862 18:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.121 18:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.121 18:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:11.121 18:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:11.381 true 00:06:11.381 18:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:11.381 18:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.759 18:14:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.759 18:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:12.759 18:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:13.018 true 00:06:13.018 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:13.018 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.955 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.955 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:13.955 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:14.213 true 00:06:14.213 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:14.214 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.214 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.472 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:14.472 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:14.731 true 00:06:14.731 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:14.731 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.666 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.925 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:15.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.925 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:15.925 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:16.210 true 00:06:16.210 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:16.210 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.147 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.147 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:17.147 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:17.407 true 00:06:17.407 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:17.407 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.666 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.925 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:17.925 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:17.925 true 00:06:17.925 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:17.926 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.305 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.305 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:19.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.305 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:19.305 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:19.564 true 00:06:19.564 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:19.564 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.501 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.502 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:20.502 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:20.761 true 00:06:20.761 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:20.761 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.019 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.279 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:21.279 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:21.279 true 00:06:21.279 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:21.279 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.687 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.687 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:22.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.687 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:22.687 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:23.007 true 00:06:23.007 18:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:23.007 18:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.617 18:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.877 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:23.877 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:24.135 true 00:06:24.135 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:24.135 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.393 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.651 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:24.651 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:24.651 true 00:06:24.651 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426 00:06:24.651 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.032 Initializing NVMe Controllers 00:06:26.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:26.032 Controller IO queue size 128, less than required. 00:06:26.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:26.032 Controller IO queue size 128, less than required. 00:06:26.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:26.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:26.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:26.032 Initialization complete. Launching workers. 
00:06:26.032 ========================================================
00:06:26.032                                                  Latency(us)
00:06:26.032 Device Information                                        :       IOPS      MiB/s    Average        min        max
00:06:26.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2056.43       1.00   45217.04    2134.50 1013271.78
00:06:26.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18162.70       8.87    7046.94    2277.84  297728.46
00:06:26.032 ========================================================
00:06:26.032 Total                                                     :   20219.13       9.87   10929.12    2134.50 1013271.78
00:06:26.032
00:06:26.032 18:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:26.032 18:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:26.032 18:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:26.291 true
00:06:26.291 18:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246426
00:06:26.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (246426) - No such process
00:06:26.291 18:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 246426
00:06:26.291 18:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:26.550 18:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:26.550 18:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:26.550 18:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:26.550 18:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:26.550 18:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:26.550 18:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:26.809 null0
00:06:26.809 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:26.809 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:26.809 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:27.067 null1
00:06:27.067 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:27.067 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:27.067 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:27.327 null2
00:06:27.327 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:27.327 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:27.327 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:06:27.327 null3
00:06:27.327 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:27.327 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:27.327 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:06:27.585 null4
00:06:27.585 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:27.585 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:27.585 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:06:27.844 null5
00:06:27.844 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:27.844 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:27.844 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:06:28.103 null6
00:06:28.103 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:28.103 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:28.103 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:06:28.103 null7
00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
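At @58-@64 above, the trace leaves the resize loop behind and sets up the parallel hotplug phase: eight null bdevs are created (bdev_null_create takes a name, a total size in MB, and a block size in bytes), then eight add_remove workers are forked, one per bdev. A sketch of that setup as the trace implies it; the add_remove arguments mirror the @63 calls traced below, while the RPC_PY variable is the same illustrative name used earlier:

  # Sketch (assumed shape) of ns_hotplug_stress.sh lines 58-66 as traced above and below.
  nthreads=8                                          # @58
  pids=()                                             # @58
  for ((i = 0; i < nthreads; i++)); do                # @59
      "$RPC_PY" bdev_null_create "null$i" 100 4096    # @60: 100 MB null bdev, 4096-byte blocks
  done
  for ((i = 0; i < nthreads; i++)); do                # @62
      add_remove $((i + 1)) "null$i" &                # @63: worker i churns nsid i+1 <-> null<i>
      pids+=($!)                                      # @64: remember each worker's pid
  done
  wait "${pids[@]}"                                   # @66: "wait 252047 252048 ..." in the log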
00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
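Each worker's body is what the interleaved @14-@18 entries trace: bind its null bdev to a fixed NSID, unbind it, ten times over. A reconstructed shape, using the same illustrative RPC_PY and NQN names as above:

  # Sketch (assumed shape) of the add_remove helper traced at @14-@18.
  add_remove() {
      local nsid=$1 bdev=$2                                          # @14
      for ((i = 0; i < 10; i++)); do                                 # @16: ten add/remove rounds
          "$RPC_PY" nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"  # @17: attach bdev as this NSID
          "$RPC_PY" nvmf_subsystem_remove_ns "$NQN" "$nsid"          # @18: detach it again
      done
  }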
00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:28.362 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
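Because the eight workers run concurrently, their xtrace output interleaves from here on: @17 add_ns and @18 remove_ns calls for namespaces 1-8 alternate in no fixed order, and the @66 wait below blocks on all eight pids. One way to watch the churn from outside the test, shown purely as a hedged aside (nvmf_get_subsystems is a stock rpc.py method; the jq filter is illustrative and not part of this run):

  # Poll the subsystem's live namespace list while the workers run (observational only).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
      | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'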
00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 252047 252048 252051 252052 252054 252056 252058 252060 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:28.363 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:28.622 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.622 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.622 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:28.622 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.622 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.622 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.622 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.622 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.622 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:28.623 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.623 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.623 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.623 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.623 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.623 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.623 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.623 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.623 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:28.623 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.623 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.623 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.623 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.623 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.623 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.882 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:28.882 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:28.882 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.882 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:28.882 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:28.882 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:28.882 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:28.882 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.141 18:14:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.141 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.401 18:14:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.401 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.660 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.660 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.660 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.660 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.660 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.660 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.660 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.660 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.920 18:14:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.920 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.179 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.180 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.180 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.439 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:30.699 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.958 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.958 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.958 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.958 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.958 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.958 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.958 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.958 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:31.217 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:31.218 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:31.218 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:31.218 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:31.218 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.476 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:31.735 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:31.735 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:31.735 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.735 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:31.735 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:31.735 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:31.735 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:31.735 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:31.994 18:14:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.994 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.994 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:31.994 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.994 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.994 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.994 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:31.994 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.994 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:31.994 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.994 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.995 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:31.995 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.995 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.995 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:31.995 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.995 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.995 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:31.995 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.995 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.995 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:31.995 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.995 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:31.995 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:32.253 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:32.253 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:32.253 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.253 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:32.253 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:32.253 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.254 
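The records above are the heart of the hotplug stress loop in ns_hotplug_stress.sh (script lines @16-@18, per the xtrace prefixes): each pass attaches eight null bdevs as namespaces 1 through 8 of nqn.2016-06.io.spdk:cnode1 and then detaches them all again, for up to ten passes, while a workload keeps the controller busy. The add/remove order in the log is shuffled, so the script evidently issues the RPCs concurrently; a simplified sequential sketch of the same loop follows (the rpc.py path is shortened, and null0 through null7 are assumed to already exist):

    #!/usr/bin/env bash
    # Stress namespace hot-add/hot-remove on one subsystem, as traced above.
    NQN=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; i++)); do
        for n in {1..8}; do
            # nsid n is backed by null bdev null(n-1), matching the trace
            rpc.py nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
        done
        for n in {1..8}; do
            rpc.py nvmf_subsystem_remove_ns "$NQN" "$n"
        done
    done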
18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:32.254 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:32.512 rmmod nvme_tcp 00:06:32.512 rmmod nvme_fabrics 00:06:32.512 rmmod nvme_keyring 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 245942 ']' 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 245942 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 245942 ']' 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 245942 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 245942 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 245942' 00:06:32.512 killing process with pid 245942 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 245942 00:06:32.512 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 245942 00:06:32.771 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:32.771 18:14:25 
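After the final pass the trap is cleared and nvmftestfini tears the fixture down: sync, up to twenty attempts to unload nvme-tcp and nvme-fabrics (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring are modprobe -v output), then killprocess stops the nvmf_tgt reactor, pid 245942 in this run. A rough sketch of that sequence, assuming $nvmfpid was saved at startup; the sleep-and-retry is an assumption, since the log only shows a single successful unload pass:

    # Teardown as in nvmftestfini/nvmfcleanup above.
    sync
    set +e
    for i in {1..20}; do
        # -v echoes the underlying rmmod calls, as seen in the log
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumption: back off until connections drain
    done
    set -e
    if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid"
        wait "$nvmfpid"
    fi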
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:32.771 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:32.771 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:32.771 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:32.771 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:06:32.771 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:06:32.771 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:32.771 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:32.771 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.771 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:32.771 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.677 18:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:34.677 00:06:34.677 real 0m48.929s 00:06:34.677 user 3m17.358s 00:06:34.677 sys 0m15.609s 00:06:34.677 18:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.677 18:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:34.677 ************************************ 00:06:34.677 END TEST nvmf_ns_hotplug_stress 00:06:34.677 ************************************ 00:06:34.677 18:14:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:34.677 18:14:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:34.677 18:14:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.677 18:14:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:34.937 ************************************ 00:06:34.937 START TEST nvmf_delete_subsystem 00:06:34.937 ************************************ 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:34.937 * Looking for test storage... 
00:06:34.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:34.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.937 --rc genhtml_branch_coverage=1 00:06:34.937 --rc genhtml_function_coverage=1 00:06:34.937 --rc genhtml_legend=1 00:06:34.937 --rc geninfo_all_blocks=1 00:06:34.937 --rc geninfo_unexecuted_blocks=1 00:06:34.937 00:06:34.937 ' 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:34.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.937 --rc genhtml_branch_coverage=1 00:06:34.937 --rc genhtml_function_coverage=1 00:06:34.937 --rc genhtml_legend=1 00:06:34.937 --rc geninfo_all_blocks=1 00:06:34.937 --rc geninfo_unexecuted_blocks=1 00:06:34.937 00:06:34.937 ' 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:34.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.937 --rc genhtml_branch_coverage=1 00:06:34.937 --rc genhtml_function_coverage=1 00:06:34.937 --rc genhtml_legend=1 00:06:34.937 --rc geninfo_all_blocks=1 00:06:34.937 --rc geninfo_unexecuted_blocks=1 00:06:34.937 00:06:34.937 ' 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:34.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.937 --rc genhtml_branch_coverage=1 00:06:34.937 --rc genhtml_function_coverage=1 00:06:34.937 --rc genhtml_legend=1 00:06:34.937 --rc geninfo_all_blocks=1 00:06:34.937 --rc geninfo_unexecuted_blocks=1 00:06:34.937 00:06:34.937 ' 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
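The lt 1.15 2 exchange above is scripts/common.sh deciding which lcov option names to export: it splits both version strings on '.', '-' and ':' and compares the fields numerically, left to right. A condensed sketch of just the less-than case (the real cmp_versions dispatches on an op argument and also handles the other comparison operators):

    # Field-wise version compare, enough for "lt 1.15 2" above.
    lt() {
        local IFS='.-:'
        local -a v1 v2
        local i n
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            # missing fields default to 0, so "2" compares as "2.0"
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov < 2: use the 1.x option names"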
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.937 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:34.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:34.938 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
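The "[: : integer expression expected" diagnostic above is a real, if harmless, script error captured by the trace: line 33 of nvmf/common.sh runs a numeric test while the variable it expands is empty, so test sees '[' '' -eq 1 ']' and refuses the comparison. The variable's name is not visible in the log; a hypothetical reproduction and the usual guard:

    # Hypothetical reproduction of the common.sh line 33 diagnostic.
    FLAG=""   # assumed name; the actual variable is not shown in the log
    [ "$FLAG" -eq 1 ] 2>/dev/null || echo "numeric test on empty string fails"
    # Defaulting the expansion avoids the error entirely.
    if [ "${FLAG:-0}" -eq 1 ]; then echo enabled; else echo disabled; fi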
local -ga x722 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:41.521 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:41.521 
18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:41.521 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:41.521 Found net devices under 0000:86:00.0: cvl_0_0 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:41.521 Found net devices under 0000:86:00.1: cvl_0_1 
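The two "Found net devices under ..." records show how gather_supported_nvmf_pci_devs resolves each matching PCI function (here two E810 ports, device id 0x159b) to its kernel interface: it globs the function's net/ directory in sysfs and strips everything but the basename, yielding cvl_0_0 and cvl_0_1. The same lookup in isolation, with the PCI address as a placeholder:

    # Resolve a PCI function to its net interface via sysfs.
    pci=0000:86:00.0   # substitute a real address
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] || continue   # no netdev bound to this function
        echo "Found net device under $pci: ${path##*/}"
    done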
00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:41.521 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:41.521 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:41.521 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:41.521 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:41.521 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:41.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:41.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:06:41.522 00:06:41.522 --- 10.0.0.2 ping statistics --- 00:06:41.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.522 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:41.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:41.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:06:41.522 00:06:41.522 --- 10.0.0.1 ping statistics --- 00:06:41.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.522 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=256480 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 256480 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 256480 ']' 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.522 18:14:34 
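nvmf_tcp_init, traced above, turns the two physical ports into a self-contained initiator/target pair: cvl_0_0 moves into a private network namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and the two pings confirm the path in both directions. The commands, reconstructed from the trace (the addr-flush preamble and the SPDK_NVMF comment tag on the iptables rule are dropped here):

    # Build the namespaced TCP test bed as the log shows.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP on the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host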
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.522 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:41.522 [2024-10-08 18:14:34.258148] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:06:41.522 [2024-10-08 18:14:34.258192] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.522 [2024-10-08 18:14:34.328266] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.522 [2024-10-08 18:14:34.399246] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:41.522 [2024-10-08 18:14:34.399295] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:41.522 [2024-10-08 18:14:34.399302] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:41.522 [2024-10-08 18:14:34.399308] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:41.522 [2024-10-08 18:14:34.399313] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:41.522 [2024-10-08 18:14:34.400113] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.522 [2024-10-08 18:14:34.400114] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.781 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.781 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:41.781 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:41.781 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:41.781 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.041 [2024-10-08 18:14:35.141281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:42.041 18:14:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.041 [2024-10-08 18:14:35.161489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.041 NULL1 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.041 Delay0 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=256689 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:42.041 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:42.041 [2024-10-08 18:14:35.263160] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
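With the target process listening, delete_subsystem.sh builds the fixture whose teardown it wants to test: a TCP transport, subsystem cnode1 capped at 10 namespaces, a listener on the namespaced address, and a 1000 MB null bdev wrapped in a delay bdev whose -r/-t/-w/-n latency arguments are, per the delay bdev's convention, microseconds, so each operation takes about a second and I/O from the perf job is guaranteed to still be in flight two seconds later when the subsystem is deleted. Condensed from the trace, with the rpc.py and spdk_nvme_perf paths shortened:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512           # 1000 MB, 512 B blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s per operation
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1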
00:06:43.948 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:43.948 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.948 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:44.207 Read completed with error (sct=0, sc=8)
00:06:44.207 Write completed with error (sct=0, sc=8)
00:06:44.207 starting I/O failed: -6
[... many identical "Read/Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" markers elided: the queued perf I/Os failing back as the subsystem is deleted ...]
00:06:44.208 [2024-10-08 18:14:37.351793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2a750 is same with the state(6) to be set
00:06:44.208 [2024-10-08 18:14:37.352138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1918000c00 is same with the state(6) to be set
[... further interleaved completion-error lines elided ...]
00:06:45.145 [2024-10-08 18:14:38.316545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ba70 is same with the state(6) to be set
00:06:45.145 [2024-10-08 18:14:38.352696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f191800d640 is same with the state(6) to be set
00:06:45.145 [2024-10-08 18:14:38.354372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2a930 is same with the state(6) to be set
00:06:45.146 [2024-10-08 18:14:38.354510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2a570 is same with the state(6) to be set
00:06:45.146 [2024-10-08 18:14:38.355338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2a390 is same with the state(6) to be set
00:06:45.146 Initializing NVMe Controllers
00:06:45.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:45.146 Controller IO queue size 128, less than required.
00:06:45.146 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:45.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:45.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:45.146 Initialization complete. Launching workers.
00:06:45.146 ========================================================
00:06:45.146                                                              Latency(us)
00:06:45.146 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:45.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:      179.70       0.09  957898.08     498.59 1011557.82
00:06:45.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:      146.94       0.07  911192.39     216.27 1012675.24
00:06:45.146 ========================================================
00:06:45.146 Total                                                                    :      326.64       0.16  936887.62     216.27 1012675.24
00:06:45.146
00:06:45.146 [2024-10-08 18:14:38.355929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2ba70 (9): Bad file descriptor
00:06:45.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:45.146 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:45.146 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:45.146 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 256689
00:06:45.146 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 256689
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (256689) - No such process
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 256689
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 256689
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 256689
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.714 18:14:38
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.714 [2024-10-08 18:14:38.884565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=257381 00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:45.714 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 257381 00:06:45.715 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:45.715 [2024-10-08 18:14:38.965475] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
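The @57/@58 pair that repeats below is the script's wait-for-exit idiom: probe the perf pid with kill -0 and give up after a bounded number of half-second naps. A minimal bash paraphrase (not the literal script text; the bound of 20 matches this second run, while the first run used 30):

delay=0
while kill -0 "$perf_pid" 2> /dev/null; do      # probe the process without signaling it
    (( delay++ > 20 )) && exit 1                # give up after roughly 10 s
    sleep 0.5
done

Once kill -0 reports "No such process", the script wraps up the run; this time perf is expected to finish cleanly, since the subsystem stays in place for the full 3-second job.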
00:06:46.282 18:14:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:46.282 18:14:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 257381 00:06:46.282 18:14:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:46.850 18:14:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:46.850 18:14:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 257381 00:06:46.850 18:14:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:47.109 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:47.109 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 257381 00:06:47.109 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:47.677 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:47.677 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 257381 00:06:47.677 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:48.245 18:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:48.245 18:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 257381 00:06:48.245 18:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:48.814 18:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:48.814 18:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 257381 00:06:48.814 18:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:49.073 Initializing NVMe Controllers 00:06:49.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:49.073 Controller IO queue size 128, less than required. 00:06:49.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:49.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:49.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:49.073 Initialization complete. Launching workers. 
00:06:49.073 ========================================================
00:06:49.073                                                              Latency(us)
00:06:49.073 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:49.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:      128.00       0.06 1001915.26 1000152.34 1005815.48
00:06:49.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:      128.00       0.06 1003687.58 1000157.60 1009248.26
00:06:49.073 ========================================================
00:06:49.073 Total                                                                    :      256.00       0.12 1002801.42 1000152.34 1009248.26
00:06:49.073
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 257381
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (257381) - No such process
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 257381
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:49.332 rmmod nvme_tcp
00:06:49.332 rmmod nvme_fabrics
00:06:49.332 rmmod nvme_keyring
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 256480 ']'
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 256480
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 256480 ']'
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 256480
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 256480
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo
']' 00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 256480' 00:06:49.332 killing process with pid 256480 00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 256480 00:06:49.332 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 256480 00:06:49.591 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:49.591 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:49.591 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:49.591 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:49.591 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:06:49.591 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:49.591 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:06:49.591 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:49.591 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:49.591 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.591 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.591 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.497 18:14:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:51.497 00:06:51.497 real 0m16.796s 00:06:51.497 user 0m30.528s 00:06:51.497 sys 0m5.575s 00:06:51.497 18:14:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.497 18:14:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.497 ************************************ 00:06:51.497 END TEST nvmf_delete_subsystem 00:06:51.497 ************************************ 00:06:51.756 18:14:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:51.756 18:14:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:51.756 18:14:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.756 18:14:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:51.756 ************************************ 00:06:51.756 START TEST nvmf_host_management 00:06:51.756 ************************************ 00:06:51.756 18:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:51.756 * Looking for test storage... 
00:06:51.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.756 18:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:51.756 18:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:06:51.756 18:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.756 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:51.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.757 --rc genhtml_branch_coverage=1 00:06:51.757 --rc genhtml_function_coverage=1 00:06:51.757 --rc genhtml_legend=1 00:06:51.757 --rc geninfo_all_blocks=1 00:06:51.757 --rc geninfo_unexecuted_blocks=1 00:06:51.757 00:06:51.757 ' 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:51.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.757 --rc genhtml_branch_coverage=1 00:06:51.757 --rc genhtml_function_coverage=1 00:06:51.757 --rc genhtml_legend=1 00:06:51.757 --rc geninfo_all_blocks=1 00:06:51.757 --rc geninfo_unexecuted_blocks=1 00:06:51.757 00:06:51.757 ' 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:51.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.757 --rc genhtml_branch_coverage=1 00:06:51.757 --rc genhtml_function_coverage=1 00:06:51.757 --rc genhtml_legend=1 00:06:51.757 --rc geninfo_all_blocks=1 00:06:51.757 --rc geninfo_unexecuted_blocks=1 00:06:51.757 00:06:51.757 ' 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:51.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.757 --rc genhtml_branch_coverage=1 00:06:51.757 --rc genhtml_function_coverage=1 00:06:51.757 --rc genhtml_legend=1 00:06:51.757 --rc geninfo_all_blocks=1 00:06:51.757 --rc geninfo_unexecuted_blocks=1 00:06:51.757 00:06:51.757 ' 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:51.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:51.757 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:52.016 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:58.589 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:58.589 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:58.589 Found net devices under 0000:86:00.0: cvl_0_0 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.589 18:14:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:58.589 Found net devices under 0000:86:00.1: cvl_0_1 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.589 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:58.590 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:58.590 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.590 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.590 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:58.590 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:58.590 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.590 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.590 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.590 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.590 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:58.590 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:58.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:58.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:06:58.590 00:06:58.590 --- 10.0.0.2 ping statistics --- 00:06:58.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.590 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:58.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:58.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:06:58.590 00:06:58.590 --- 10.0.0.1 ping statistics --- 00:06:58.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.590 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=261574 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 261574 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:58.590 18:14:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 261574 ']' 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.590 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.590 [2024-10-08 18:14:51.199144] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:06:58.590 [2024-10-08 18:14:51.199197] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.590 [2024-10-08 18:14:51.272996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.590 [2024-10-08 18:14:51.352851] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.590 [2024-10-08 18:14:51.352888] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.590 [2024-10-08 18:14:51.352896] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.590 [2024-10-08 18:14:51.352902] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.590 [2024-10-08 18:14:51.352908] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
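The namespace plumbing traced above (nvmf/common.sh@250-291) is easy to lose in the xtrace noise: the target-side port is moved into its own network namespace, nvmf_tgt is then launched through NVMF_TARGET_NS_CMD (ip netns exec), and the two pings verify reachability in both directions. A minimal standalone sketch of the same setup, assuming the cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addressing seen in this run, and using plain iptables where the harness calls its ipts wrapper:

  # target port lives in its own namespace; the initiator keeps the peer port
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # accept NVMe/TCP traffic (port 4420) on the initiator-side port; the harness's
  # ipts wrapper additionally tags the rule with an SPDK_NVMF comment so the
  # teardown path can strip it again via iptables-save | grep -v SPDK_NVMF
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # the two pings in the log correspond to exactly this reachability check
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1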
00:06:58.590 [2024-10-08 18:14:51.354395] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.590 [2024-10-08 18:14:51.354513] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.590 [2024-10-08 18:14:51.354544] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.590 [2024-10-08 18:14:51.354545] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.850 [2024-10-08 18:14:52.090117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.850 Malloc0 00:06:58.850 [2024-10-08 18:14:52.149533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:58.850 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=261667 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 261667 /var/tmp/bdevperf.sock 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 261667 ']' 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:59.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:59.110 { 00:06:59.110 "params": { 00:06:59.110 "name": "Nvme$subsystem", 00:06:59.110 "trtype": "$TEST_TRANSPORT", 00:06:59.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:59.110 "adrfam": "ipv4", 00:06:59.110 "trsvcid": "$NVMF_PORT", 00:06:59.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:59.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:59.110 "hdgst": ${hdgst:-false}, 00:06:59.110 "ddgst": ${ddgst:-false} 00:06:59.110 }, 00:06:59.110 "method": "bdev_nvme_attach_controller" 00:06:59.110 } 00:06:59.110 EOF 00:06:59.110 )") 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:59.110 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:59.110 "params": { 00:06:59.110 "name": "Nvme0", 00:06:59.110 "trtype": "tcp", 00:06:59.110 "traddr": "10.0.0.2", 00:06:59.110 "adrfam": "ipv4", 00:06:59.110 "trsvcid": "4420", 00:06:59.110 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:59.110 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:59.110 "hdgst": false, 00:06:59.110 "ddgst": false 00:06:59.110 }, 00:06:59.110 "method": "bdev_nvme_attach_controller" 00:06:59.110 }' 00:06:59.110 [2024-10-08 18:14:52.247309] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
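The gen_nvmf_target_json trace above builds a single bdev_nvme_attach_controller config element (the printf output) and hands it to bdevperf on fd 63. Only the element itself is echoed to the log; the document bdevperf actually consumes is presumably that element wrapped in the standard SPDK JSON-config envelope. A sketch of an equivalent standalone invocation, where the envelope and the /tmp/bdevperf.json path are assumptions rather than quotes from this trace:

  # config element copied from the printf output above; envelope assumed
  cat > /tmp/bdevperf.json <<'EOF'
  {"subsystems": [{"subsystem": "bdev", "config": [{
    "method": "bdev_nvme_attach_controller",
    "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
               "adrfam": "ipv4", "trsvcid": "4420",
               "subnqn": "nqn.2016-06.io.spdk:cnode0",
               "hostnqn": "nqn.2016-06.io.spdk:host0",
               "hdgst": false, "ddgst": false}}]}]}
  EOF
  # same flags as the harness run above
  build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
      -q 64 -o 65536 -w verify -t 10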
00:06:59.110 [2024-10-08 18:14:52.247357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid261667 ] 00:06:59.110 [2024-10-08 18:14:52.317499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.110 [2024-10-08 18:14:52.390284] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.679 Running I/O for 10 seconds... 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=794 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 794 -ge 100 ']' 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:59.940 18:14:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.940 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:59.940 [2024-10-08 18:14:53.164871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24982c0 is same with the state(6) to be set
00:06:59.941 (the identical tcp.c:1773 message for tqpair=0x24982c0 repeats roughly 40 more times, timestamps running through 18:14:53.165164)
00:06:59.941 [2024-10-08 18:14:53.167847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:06:59.941 [2024-10-08 18:14:53.167884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:59.941 (three further ASYNC EVENT REQUEST commands, qid:0 cid:1-3, are printed and completed the same way: ABORTED - SQ DELETION)
00:06:59.941 [2024-10-08 18:14:53.167937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17085c0 is same with the state(6) to be set
00:06:59.941 (18:14:53.167985 through 18:14:53.168949: all 64 in-flight I/O commands are printed and completed as ABORTED - SQ DELETION (00/08): READ sqid:1 cid:22-63 covering lba 109312-114560 and WRITE sqid:1 cid:0-21 covering lba 114688-117376, each len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0)
00:06:59.942 [2024-10-08 18:14:53.169018] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19218e0 was disconnected and freed. reset controller.
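The error burst above is the teardown signature this test is provoking, not a malfunction: host_management.sh@84 removes host0's access to cnode0 while bdevperf still has its full queue outstanding, so the target drops the qpair and all 64 queued commands (matching -q 64) complete as ABORTED - SQ DELETION before bdev_nvme schedules a controller reset. In the harness, rpc_cmd is a thin wrapper over the SPDK RPC client; a sketch of the equivalent direct call, with the rpc.py path assumed from this workspace layout:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0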
00:06:59.942 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.942 [2024-10-08 18:14:53.169925] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:06:59.943 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:59.943 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.943 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.943 task offset: 109312 on job bdev=Nvme0n1 fails 00:06:59.943 00:06:59.943 Latency(us) 00:06:59.943 [2024-10-08T16:14:53.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.943 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:59.943 Job: Nvme0n1 ended in about 0.45 seconds with error 00:06:59.943 Verification LBA range: start 0x0 length 0x400 00:06:59.943 Nvme0n1 : 0.45 1890.22 118.14 141.66 0.00 30734.48 1458.96 27088.21 00:06:59.943 [2024-10-08T16:14:53.266Z] =================================================================================================================== 00:06:59.943 [2024-10-08T16:14:53.266Z] Total : 1890.22 118.14 141.66 0.00 30734.48 1458.96 27088.21 00:06:59.943 [2024-10-08 18:14:53.172281] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.943 [2024-10-08 18:14:53.172303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17085c0 (9): Bad file descriptor 00:06:59.943 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.943 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:59.943 [2024-10-08 18:14:53.184444] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
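The failed-run table is internally consistent, which is a quick way to sanity-check bdevperf output: at the 64 KiB I/O size used here, 1890.22 IOPS corresponds exactly to the 118.14 MiB/s reported, and 141.66 Fail/s over the ~0.45 s runtime accounts for the 64 aborted commands, i.e. one full queue. For example:

  awk 'BEGIN { printf "%.2f MiB/s, %.0f failed I/Os\n", 1890.22 * 65536 / 1048576, 141.66 * 0.45 }'
  # -> 118.14 MiB/s, 64 failed I/Os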
00:07:00.881 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 261667 00:07:00.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (261667) - No such process 00:07:00.881 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:00.881 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:00.881 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:00.881 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:00.881 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:00.881 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:00.881 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:00.881 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:00.881 { 00:07:00.881 "params": { 00:07:00.881 "name": "Nvme$subsystem", 00:07:00.881 "trtype": "$TEST_TRANSPORT", 00:07:00.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:00.881 "adrfam": "ipv4", 00:07:00.881 "trsvcid": "$NVMF_PORT", 00:07:00.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:00.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:00.881 "hdgst": ${hdgst:-false}, 00:07:00.881 "ddgst": ${ddgst:-false} 00:07:00.881 }, 00:07:00.881 "method": "bdev_nvme_attach_controller" 00:07:00.881 } 00:07:00.881 EOF 00:07:00.881 )") 00:07:00.881 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:00.881 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:00.881 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:00.881 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:00.881 "params": { 00:07:00.881 "name": "Nvme0", 00:07:00.881 "trtype": "tcp", 00:07:00.881 "traddr": "10.0.0.2", 00:07:00.881 "adrfam": "ipv4", 00:07:00.881 "trsvcid": "4420", 00:07:00.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:00.881 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:00.881 "hdgst": false, 00:07:00.881 "ddgst": false 00:07:00.881 }, 00:07:00.881 "method": "bdev_nvme_attach_controller" 00:07:00.881 }' 00:07:01.140 [2024-10-08 18:14:54.232182] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:07:01.140 [2024-10-08 18:14:54.232227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262136 ] 00:07:01.140 [2024-10-08 18:14:54.298035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.140 [2024-10-08 18:14:54.367903] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.399 Running I/O for 1 seconds... 
00:07:02.336 2048.00 IOPS, 128.00 MiB/s 00:07:02.336 Latency(us) 00:07:02.336 [2024-10-08T16:14:55.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.336 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:02.336 Verification LBA range: start 0x0 length 0x400 00:07:02.336 Nvme0n1 : 1.03 2058.81 128.68 0.00 0.00 30605.66 4556.31 26963.38 00:07:02.336 [2024-10-08T16:14:55.659Z] =================================================================================================================== 00:07:02.336 [2024-10-08T16:14:55.659Z] Total : 2058.81 128.68 0.00 0.00 30605.66 4556.31 26963.38 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:02.595 rmmod nvme_tcp 00:07:02.595 rmmod nvme_fabrics 00:07:02.595 rmmod nvme_keyring 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 261574 ']' 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 261574 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 261574 ']' 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 261574 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 261574 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:02.595 18:14:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 261574' 00:07:02.595 killing process with pid 261574 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 261574 00:07:02.595 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 261574 00:07:02.854 [2024-10-08 18:14:56.088878] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:02.854 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:02.854 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:02.854 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:02.854 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:02.854 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:02.854 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:02.854 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:02.854 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:02.854 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:02.854 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.854 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:02.854 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:05.392 00:07:05.392 real 0m13.316s 00:07:05.392 user 0m23.313s 00:07:05.392 sys 0m5.734s 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:05.392 ************************************ 00:07:05.392 END TEST nvmf_host_management 00:07:05.392 ************************************ 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:05.392 ************************************ 00:07:05.392 START TEST nvmf_lvol 00:07:05.392 ************************************ 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:05.392 * Looking for test storage... 00:07:05.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:05.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.392 --rc genhtml_branch_coverage=1 00:07:05.392 --rc genhtml_function_coverage=1 00:07:05.392 --rc genhtml_legend=1 00:07:05.392 --rc geninfo_all_blocks=1 00:07:05.392 --rc geninfo_unexecuted_blocks=1 00:07:05.392 00:07:05.392 ' 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:05.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.392 --rc genhtml_branch_coverage=1 00:07:05.392 --rc genhtml_function_coverage=1 00:07:05.392 --rc genhtml_legend=1 00:07:05.392 --rc geninfo_all_blocks=1 00:07:05.392 --rc geninfo_unexecuted_blocks=1 00:07:05.392 00:07:05.392 ' 00:07:05.392 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:05.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.392 --rc genhtml_branch_coverage=1 00:07:05.392 --rc genhtml_function_coverage=1 00:07:05.392 --rc genhtml_legend=1 00:07:05.393 --rc geninfo_all_blocks=1 00:07:05.393 --rc geninfo_unexecuted_blocks=1 00:07:05.393 00:07:05.393 ' 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:05.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.393 --rc genhtml_branch_coverage=1 00:07:05.393 --rc genhtml_function_coverage=1 00:07:05.393 --rc genhtml_legend=1 00:07:05.393 --rc geninfo_all_blocks=1 00:07:05.393 --rc geninfo_unexecuted_blocks=1 00:07:05.393 00:07:05.393 ' 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
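
The lt 1.15 2 check traced above is cmp_versions from scripts/common.sh: both version strings are split on ., - and :, and the fields are compared numerically one by one, with decimal rejecting anything non-numeric. A minimal bash sketch of the same idiom — simplified, not the exact SPDK helper, and the usage line at the end is illustrative:

  # Sketch: field-wise "less than" for dotted version strings; missing or
  # non-numeric fields are treated as 0, as the decimal guard above does.
  lt() {
      local -a ver1 ver2
      local v len a b
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          a=${ver1[v]:-0}; b=${ver2[v]:-0}
          [[ $a =~ ^[0-9]+$ ]] || a=0
          [[ $b =~ ^[0-9]+$ ]] || b=0
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1    # all fields equal: not less-than
  }
  # Mirrors the gate above: pick lcov 1.x coverage options when lcov < 2.
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use lcov 1.x options"
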
00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:05.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:05.393 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:11.964 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:11.965 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:11.965 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:11.965 18:15:04 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:11.965 Found net devices under 0000:86:00.0: cvl_0_0 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:11.965 Found net devices under 0000:86:00.1: cvl_0_1 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:11.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:11.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:07:11.965 00:07:11.965 --- 10.0.0.2 ping statistics --- 00:07:11.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.965 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:11.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:11.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:07:11.965 00:07:11.965 --- 10.0.0.1 ping statistics --- 00:07:11.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.965 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=266037 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 266037 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 266037 ']' 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.965 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:11.965 [2024-10-08 18:15:04.539425] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
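
Everything nvmf_tcp_init just did amounts to a two-port loopback: the target-side e810 port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator port (cvl_0_1) keeps 10.0.0.1 in the root namespace, and one tagged iptables rule opens the NVMe/TCP listener port. Condensed from the commands above; interface names and addresses are the ones this run reported:

  # Sketch of the topology nvmf_tcp_init built for this run.
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # start from clean ports
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"              # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # Open the listener port; the SPDK_NVMF comment tags the rule so teardown
  # can sweep it with iptables-save | grep -v SPDK_NVMF | iptables-restore.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                           # root ns -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1       # target namespace -> root ns

With that in place, nvmf_tgt is launched inside the namespace (ip netns exec "$NS" ...), which is why the EAL/reactor start-up lines continue below.
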
00:07:11.965 [2024-10-08 18:15:04.539477] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.965 [2024-10-08 18:15:04.610241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.965 [2024-10-08 18:15:04.690406] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.965 [2024-10-08 18:15:04.690442] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.966 [2024-10-08 18:15:04.690449] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.966 [2024-10-08 18:15:04.690455] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.966 [2024-10-08 18:15:04.690464] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:11.966 [2024-10-08 18:15:04.691436] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.966 [2024-10-08 18:15:04.691465] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.966 [2024-10-08 18:15:04.691465] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.225 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.225 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:12.225 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:12.225 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:12.225 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:12.225 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.225 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:12.484 [2024-10-08 18:15:05.590761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.484 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:12.743 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:12.743 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:12.743 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:12.743 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:13.002 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:13.260 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7d2e9d07-c0d8-4496-a347-ec39a28f041e 00:07:13.260 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7d2e9d07-c0d8-4496-a347-ec39a28f041e lvol 20 00:07:13.519 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=fad366d2-f421-4885-9b72-42711d035527 00:07:13.519 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:13.778 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fad366d2-f421-4885-9b72-42711d035527 00:07:13.778 18:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:14.037 [2024-10-08 18:15:07.230851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.037 18:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:14.295 18:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=266537 00:07:14.295 18:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:14.295 18:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:15.308 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot fad366d2-f421-4885-9b72-42711d035527 MY_SNAPSHOT 00:07:15.604 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3eb775fd-56cf-4dff-ae8c-3d291619d0a8 00:07:15.604 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize fad366d2-f421-4885-9b72-42711d035527 30 00:07:15.862 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3eb775fd-56cf-4dff-ae8c-3d291619d0a8 MY_CLONE 00:07:16.120 18:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=cd4f217c-eb26-4f05-b8af-972556e575d2 00:07:16.120 18:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate cd4f217c-eb26-4f05-b8af-972556e575d2 00:07:16.688 18:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 266537 00:07:24.806 Initializing NVMe Controllers 00:07:24.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:24.806 Controller IO queue size 128, less than required. 00:07:24.806 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
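
Stripped of the xtrace prefixes, the lvol flow above is a short RPC script: build a raid0 from two malloc bdevs, carve a logical volume store and a volume out of it, export the volume over NVMe/TCP, then snapshot, resize, clone and inflate it while spdk_nvme_perf keeps 128 random writes in flight (its summary follows below). A condensed sketch; $rpc stands for scripts/rpc.py and the lowercase variables capture the UUIDs the create calls return:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                        # -> Malloc0
  $rpc bdev_malloc_create 64 512                        # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # lvstore on the raid bdev
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # initial size 20
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # spdk_nvme_perf now runs randwrite (4 KiB, queue depth 128, 10 s)
  # against the namespace while the volume is reshaped underneath it:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze the current state
  $rpc bdev_lvol_resize "$lvol" 30                      # grow 20 -> 30 under I/O
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
  $rpc bdev_lvol_inflate "$clone"                       # fully allocate the clone
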
00:07:24.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:24.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:24.806 Initialization complete. Launching workers. 00:07:24.806 ======================================================== 00:07:24.806 Latency(us) 00:07:24.806 Device Information : IOPS MiB/s Average min max 00:07:24.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12435.80 48.58 10299.42 458.40 122980.81 00:07:24.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12294.40 48.02 10412.69 3104.94 57868.79 00:07:24.806 ======================================================== 00:07:24.806 Total : 24730.20 96.60 10355.73 458.40 122980.81 00:07:24.806 00:07:24.806 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:24.806 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fad366d2-f421-4885-9b72-42711d035527 00:07:25.065 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7d2e9d07-c0d8-4496-a347-ec39a28f041e 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:25.323 rmmod nvme_tcp 00:07:25.323 rmmod nvme_fabrics 00:07:25.323 rmmod nvme_keyring 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 266037 ']' 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 266037 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 266037 ']' 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 266037 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 266037 00:07:25.323 18:15:18 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 266037' 00:07:25.323 killing process with pid 266037 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 266037 00:07:25.323 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 266037 00:07:25.581 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:25.581 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:25.581 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:25.581 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:25.581 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:25.581 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:25.581 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:25.581 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:25.581 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:25.581 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.581 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.581 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.117 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:28.117 00:07:28.117 real 0m22.558s 00:07:28.117 user 1m5.006s 00:07:28.117 sys 0m7.495s 00:07:28.117 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.117 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:28.117 ************************************ 00:07:28.117 END TEST nvmf_lvol 00:07:28.117 ************************************ 00:07:28.117 18:15:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:28.117 18:15:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:28.117 18:15:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.117 18:15:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:28.117 ************************************ 00:07:28.117 START TEST nvmf_lvs_grow 00:07:28.117 ************************************ 00:07:28.117 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:28.117 * Looking for test storage... 
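
Both finished tests end in the same finalizer (stoptarget plus nvmftestfini), which undoes the setup in reverse before the next test's prologue continues below. A condensed sketch; $rpc, $lvol and $lvs are the illustrative variables from the sketches above, and nvmfpid is the target pid the harness recorded (266037 here):

  # Reverse of the setup: drop the export, the volume and its store, then
  # unwind the host-side modules and the firewall/namespace plumbing.
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"
  sync
  modprobe -v -r nvme-tcp                                 # also pulls out nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                      # stop the nvmf_tgt reactor app
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # sweep only the tagged rules
  ip -4 addr flush cvl_0_1                                # leave the port unconfigured
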
00:07:28.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.117 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:28.117 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:07:28.117 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.117 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:28.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.118 --rc genhtml_branch_coverage=1 00:07:28.118 --rc genhtml_function_coverage=1 00:07:28.118 --rc genhtml_legend=1 00:07:28.118 --rc geninfo_all_blocks=1 00:07:28.118 --rc geninfo_unexecuted_blocks=1 00:07:28.118 00:07:28.118 ' 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:28.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.118 --rc genhtml_branch_coverage=1 00:07:28.118 --rc genhtml_function_coverage=1 00:07:28.118 --rc genhtml_legend=1 00:07:28.118 --rc geninfo_all_blocks=1 00:07:28.118 --rc geninfo_unexecuted_blocks=1 00:07:28.118 00:07:28.118 ' 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:28.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.118 --rc genhtml_branch_coverage=1 00:07:28.118 --rc genhtml_function_coverage=1 00:07:28.118 --rc genhtml_legend=1 00:07:28.118 --rc geninfo_all_blocks=1 00:07:28.118 --rc geninfo_unexecuted_blocks=1 00:07:28.118 00:07:28.118 ' 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:28.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.118 --rc genhtml_branch_coverage=1 00:07:28.118 --rc genhtml_function_coverage=1 00:07:28.118 --rc genhtml_legend=1 00:07:28.118 --rc geninfo_all_blocks=1 00:07:28.118 --rc geninfo_unexecuted_blocks=1 00:07:28.118 00:07:28.118 ' 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:28.118 18:15:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:28.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:28.118 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:34.690 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:34.690 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.690 18:15:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:34.690 Found net devices under 0000:86:00.0: cvl_0_0 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:34.690 Found net devices under 0000:86:00.1: cvl_0_1 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.690 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:34.691 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:34.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:07:34.691 00:07:34.691 --- 10.0.0.2 ping statistics --- 00:07:34.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.691 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:34.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:07:34.691 00:07:34.691 --- 10.0.0.1 ping statistics --- 00:07:34.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.691 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=272389 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 272389 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 272389 ']' 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.691 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:34.691 [2024-10-08 18:15:27.199321] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
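[Editor's note] The device scan above matched the two E810 ports (0x8086:0x159b at 0000:86:00.0/.1) to the net devices cvl_0_0/cvl_0_1 via /sys/bus/pci/devices/<bdf>/net, and nvmf_tcp_init then split them across network namespaces. Condensed from the trace into a standalone sketch (interface names, addresses, and the namespace name are the ones from this run; on other hardware the ports will enumerate differently, and the iptables comment tagging is omitted here):

# cvl_0_0 becomes the target-side port inside its own namespace,
# cvl_0_1 stays in the host namespace as the initiator-side port.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port toward the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# both directions answered in ~0.2-0.5 ms in the ping output above/below
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# the target app is then launched inside the namespace (run from the repo root):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1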
00:07:34.691 [2024-10-08 18:15:27.199367] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.691 [2024-10-08 18:15:27.270352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.691 [2024-10-08 18:15:27.347584] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.691 [2024-10-08 18:15:27.347620] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.691 [2024-10-08 18:15:27.347627] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.691 [2024-10-08 18:15:27.347633] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.691 [2024-10-08 18:15:27.347638] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.691 [2024-10-08 18:15:27.348198] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.950 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.950 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:34.950 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:34.950 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:34.950 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:34.950 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.950 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:34.950 [2024-10-08 18:15:28.220854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.950 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:34.950 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.950 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.950 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:35.207 ************************************ 00:07:35.207 START TEST lvs_grow_clean 00:07:35.207 ************************************ 00:07:35.207 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:35.207 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:35.207 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:35.207 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:35.207 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:35.207 18:15:28 
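[Editor's note] Stripped of the xtrace noise, the transport setup the target just acknowledged with '*** TCP Transport Init ***' is a single RPC; the flags are exactly how NVMF_TRANSPORT_OPTS was assembled above for tcp (see scripts/rpc.py nvmf_create_transport -h for what -o and -u control):

# run from the repo root against the default /var/tmp/spdk.sock
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192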
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:35.207 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:35.207 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:35.207 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:35.207 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:35.466 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:35.466 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:35.466 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d450f9c7-f29e-436e-8d74-e1b1e1775100 00:07:35.466 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d450f9c7-f29e-436e-8d74-e1b1e1775100 00:07:35.466 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:35.726 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:35.726 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:35.726 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d450f9c7-f29e-436e-8d74-e1b1e1775100 lvol 150 00:07:35.984 18:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d619703c-7778-4300-b934-6fd4a5852a3d 00:07:35.984 18:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:35.984 18:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:35.984 [2024-10-08 18:15:29.288274] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:35.984 [2024-10-08 18:15:29.288322] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:35.984 true 00:07:35.985 18:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
d450f9c7-f29e-436e-8d74-e1b1e1775100 00:07:35.985 18:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:36.243 18:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:36.243 18:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:36.502 18:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d619703c-7778-4300-b934-6fd4a5852a3d 00:07:36.762 18:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:36.762 [2024-10-08 18:15:30.002440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.762 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:37.021 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=273033 00:07:37.021 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:37.021 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 273033 /var/tmp/bdevperf.sock 00:07:37.021 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 273033 ']' 00:07:37.021 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:37.021 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.021 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:37.021 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:37.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:37.021 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.021 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:37.021 [2024-10-08 18:15:30.257651] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
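[Editor's note] Everything bdevperf is about to exercise was provisioned in the dozen RPCs above. Condensed, with $LVS and $LVOL standing for the UUIDs the create calls print (d450f9c7-... and d619703c-... in this run; they are generated fresh every run):

# 200 MiB file-backed AIO bdev -> lvstore (4 MiB clusters) -> 150 MiB lvol
truncate -s 200M test/nvmf/target/aio_bdev
scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
LVS=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)
LVOL=$(scripts/rpc.py bdev_lvol_create -u "$LVS" lvol 150)
# export the lvol over NVMe/TCP
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420

The 49 total_data_clusters asserted above line up with the geometry: 200 MiB at a 4 MiB cluster size is 50 clusters, with one cluster's worth evidently going to lvstore metadata.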
00:07:37.021 [2024-10-08 18:15:30.257701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273033 ] 00:07:37.021 [2024-10-08 18:15:30.327313] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.280 [2024-10-08 18:15:30.399135] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.848 18:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.848 18:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:37.848 18:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:38.416 Nvme0n1 00:07:38.416 18:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:38.416 [ 00:07:38.416 { 00:07:38.416 "name": "Nvme0n1", 00:07:38.416 "aliases": [ 00:07:38.416 "d619703c-7778-4300-b934-6fd4a5852a3d" 00:07:38.416 ], 00:07:38.416 "product_name": "NVMe disk", 00:07:38.416 "block_size": 4096, 00:07:38.416 "num_blocks": 38912, 00:07:38.416 "uuid": "d619703c-7778-4300-b934-6fd4a5852a3d", 00:07:38.416 "numa_id": 1, 00:07:38.416 "assigned_rate_limits": { 00:07:38.416 "rw_ios_per_sec": 0, 00:07:38.416 "rw_mbytes_per_sec": 0, 00:07:38.416 "r_mbytes_per_sec": 0, 00:07:38.416 "w_mbytes_per_sec": 0 00:07:38.416 }, 00:07:38.416 "claimed": false, 00:07:38.416 "zoned": false, 00:07:38.416 "supported_io_types": { 00:07:38.416 "read": true, 00:07:38.416 "write": true, 00:07:38.417 "unmap": true, 00:07:38.417 "flush": true, 00:07:38.417 "reset": true, 00:07:38.417 "nvme_admin": true, 00:07:38.417 "nvme_io": true, 00:07:38.417 "nvme_io_md": false, 00:07:38.417 "write_zeroes": true, 00:07:38.417 "zcopy": false, 00:07:38.417 "get_zone_info": false, 00:07:38.417 "zone_management": false, 00:07:38.417 "zone_append": false, 00:07:38.417 "compare": true, 00:07:38.417 "compare_and_write": true, 00:07:38.417 "abort": true, 00:07:38.417 "seek_hole": false, 00:07:38.417 "seek_data": false, 00:07:38.417 "copy": true, 00:07:38.417 "nvme_iov_md": false 00:07:38.417 }, 00:07:38.417 "memory_domains": [ 00:07:38.417 { 00:07:38.417 "dma_device_id": "system", 00:07:38.417 "dma_device_type": 1 00:07:38.417 } 00:07:38.417 ], 00:07:38.417 "driver_specific": { 00:07:38.417 "nvme": [ 00:07:38.417 { 00:07:38.417 "trid": { 00:07:38.417 "trtype": "TCP", 00:07:38.417 "adrfam": "IPv4", 00:07:38.417 "traddr": "10.0.0.2", 00:07:38.417 "trsvcid": "4420", 00:07:38.417 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:38.417 }, 00:07:38.417 "ctrlr_data": { 00:07:38.417 "cntlid": 1, 00:07:38.417 "vendor_id": "0x8086", 00:07:38.417 "model_number": "SPDK bdev Controller", 00:07:38.417 "serial_number": "SPDK0", 00:07:38.417 "firmware_revision": "25.01", 00:07:38.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:38.417 "oacs": { 00:07:38.417 "security": 0, 00:07:38.417 "format": 0, 00:07:38.417 "firmware": 0, 00:07:38.417 "ns_manage": 0 00:07:38.417 }, 00:07:38.417 "multi_ctrlr": true, 00:07:38.417 
"ana_reporting": false 00:07:38.417 }, 00:07:38.417 "vs": { 00:07:38.417 "nvme_version": "1.3" 00:07:38.417 }, 00:07:38.417 "ns_data": { 00:07:38.417 "id": 1, 00:07:38.417 "can_share": true 00:07:38.417 } 00:07:38.417 } 00:07:38.417 ], 00:07:38.417 "mp_policy": "active_passive" 00:07:38.417 } 00:07:38.417 } 00:07:38.417 ] 00:07:38.417 18:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=273265 00:07:38.417 18:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:38.417 18:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:38.417 Running I/O for 10 seconds... 00:07:39.796 Latency(us) 00:07:39.796 [2024-10-08T16:15:33.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.796 Nvme0n1 : 1.00 23281.00 90.94 0.00 0.00 0.00 0.00 0.00 00:07:39.796 [2024-10-08T16:15:33.119Z] =================================================================================================================== 00:07:39.796 [2024-10-08T16:15:33.119Z] Total : 23281.00 90.94 0.00 0.00 0.00 0.00 0.00 00:07:39.796 00:07:40.364 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d450f9c7-f29e-436e-8d74-e1b1e1775100 00:07:40.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.654 Nvme0n1 : 2.00 23431.50 91.53 0.00 0.00 0.00 0.00 0.00 00:07:40.654 [2024-10-08T16:15:33.977Z] =================================================================================================================== 00:07:40.654 [2024-10-08T16:15:33.977Z] Total : 23431.50 91.53 0.00 0.00 0.00 0.00 0.00 00:07:40.654 00:07:40.654 true 00:07:40.654 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d450f9c7-f29e-436e-8d74-e1b1e1775100 00:07:40.654 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:40.914 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:40.914 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:40.914 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 273265 00:07:41.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.481 Nvme0n1 : 3.00 23489.00 91.75 0.00 0.00 0.00 0.00 0.00 00:07:41.481 [2024-10-08T16:15:34.804Z] =================================================================================================================== 00:07:41.481 [2024-10-08T16:15:34.804Z] Total : 23489.00 91.75 0.00 0.00 0.00 0.00 0.00 00:07:41.481 00:07:42.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.857 Nvme0n1 : 4.00 23549.50 91.99 0.00 0.00 0.00 0.00 0.00 00:07:42.857 [2024-10-08T16:15:36.180Z] 
=================================================================================================================== 00:07:42.857 [2024-10-08T16:15:36.180Z] Total : 23549.50 91.99 0.00 0.00 0.00 0.00 0.00 00:07:42.857 00:07:43.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.424 Nvme0n1 : 5.00 23603.60 92.20 0.00 0.00 0.00 0.00 0.00 00:07:43.424 [2024-10-08T16:15:36.747Z] =================================================================================================================== 00:07:43.424 [2024-10-08T16:15:36.747Z] Total : 23603.60 92.20 0.00 0.00 0.00 0.00 0.00 00:07:43.424 00:07:44.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.803 Nvme0n1 : 6.00 23629.67 92.30 0.00 0.00 0.00 0.00 0.00 00:07:44.803 [2024-10-08T16:15:38.126Z] =================================================================================================================== 00:07:44.803 [2024-10-08T16:15:38.126Z] Total : 23629.67 92.30 0.00 0.00 0.00 0.00 0.00 00:07:44.803 00:07:45.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.739 Nvme0n1 : 7.00 23658.43 92.42 0.00 0.00 0.00 0.00 0.00 00:07:45.739 [2024-10-08T16:15:39.062Z] =================================================================================================================== 00:07:45.739 [2024-10-08T16:15:39.062Z] Total : 23658.43 92.42 0.00 0.00 0.00 0.00 0.00 00:07:45.739 00:07:46.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.676 Nvme0n1 : 8.00 23675.25 92.48 0.00 0.00 0.00 0.00 0.00 00:07:46.676 [2024-10-08T16:15:39.999Z] =================================================================================================================== 00:07:46.676 [2024-10-08T16:15:39.999Z] Total : 23675.25 92.48 0.00 0.00 0.00 0.00 0.00 00:07:46.676 00:07:47.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.613 Nvme0n1 : 9.00 23658.44 92.42 0.00 0.00 0.00 0.00 0.00 00:07:47.613 [2024-10-08T16:15:40.936Z] =================================================================================================================== 00:07:47.613 [2024-10-08T16:15:40.936Z] Total : 23658.44 92.42 0.00 0.00 0.00 0.00 0.00 00:07:47.613 00:07:48.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.550 Nvme0n1 : 10.00 23680.50 92.50 0.00 0.00 0.00 0.00 0.00 00:07:48.550 [2024-10-08T16:15:41.873Z] =================================================================================================================== 00:07:48.550 [2024-10-08T16:15:41.873Z] Total : 23680.50 92.50 0.00 0.00 0.00 0.00 0.00 00:07:48.550 00:07:48.550 00:07:48.550 Latency(us) 00:07:48.550 [2024-10-08T16:15:41.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.550 Nvme0n1 : 10.00 23674.23 92.48 0.00 0.00 5403.05 2231.34 10673.01 00:07:48.550 [2024-10-08T16:15:41.873Z] =================================================================================================================== 00:07:48.550 [2024-10-08T16:15:41.873Z] Total : 23674.23 92.48 0.00 0.00 5403.05 2231.34 10673.01 00:07:48.550 { 00:07:48.550 "results": [ 00:07:48.550 { 00:07:48.550 "job": "Nvme0n1", 00:07:48.550 "core_mask": "0x2", 00:07:48.550 "workload": "randwrite", 00:07:48.550 "status": "finished", 00:07:48.550 "queue_depth": 128, 00:07:48.550 "io_size": 4096, 00:07:48.550 
"runtime": 10.002692, 00:07:48.550 "iops": 23674.226898119025, 00:07:48.550 "mibps": 92.47744882077744, 00:07:48.550 "io_failed": 0, 00:07:48.550 "io_timeout": 0, 00:07:48.550 "avg_latency_us": 5403.0541531162935, 00:07:48.550 "min_latency_us": 2231.344761904762, 00:07:48.550 "max_latency_us": 10673.005714285715 00:07:48.550 } 00:07:48.550 ], 00:07:48.550 "core_count": 1 00:07:48.550 } 00:07:48.550 18:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 273033 00:07:48.550 18:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 273033 ']' 00:07:48.550 18:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 273033 00:07:48.550 18:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:48.550 18:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.550 18:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 273033 00:07:48.550 18:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:48.550 18:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:48.550 18:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 273033' 00:07:48.550 killing process with pid 273033 00:07:48.550 18:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 273033 00:07:48.550 Received shutdown signal, test time was about 10.000000 seconds 00:07:48.550 00:07:48.550 Latency(us) 00:07:48.550 [2024-10-08T16:15:41.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.550 [2024-10-08T16:15:41.873Z] =================================================================================================================== 00:07:48.550 [2024-10-08T16:15:41.873Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:48.550 18:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 273033 00:07:48.809 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.068 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:49.327 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d450f9c7-f29e-436e-8d74-e1b1e1775100 00:07:49.327 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:49.327 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:49.327 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:49.327 18:15:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:49.586 [2024-10-08 18:15:42.806952] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:49.586 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d450f9c7-f29e-436e-8d74-e1b1e1775100 00:07:49.586 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:49.586 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d450f9c7-f29e-436e-8d74-e1b1e1775100 00:07:49.586 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.586 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.586 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.586 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.586 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.586 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.586 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.586 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:49.586 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d450f9c7-f29e-436e-8d74-e1b1e1775100 00:07:49.845 request: 00:07:49.845 { 00:07:49.845 "uuid": "d450f9c7-f29e-436e-8d74-e1b1e1775100", 00:07:49.845 "method": "bdev_lvol_get_lvstores", 00:07:49.845 "req_id": 1 00:07:49.845 } 00:07:49.845 Got JSON-RPC error response 00:07:49.845 response: 00:07:49.845 { 00:07:49.845 "code": -19, 00:07:49.845 "message": "No such device" 00:07:49.845 } 00:07:49.845 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:49.845 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.845 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:49.845 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.845 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:50.104 aio_bdev 00:07:50.104 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d619703c-7778-4300-b934-6fd4a5852a3d 00:07:50.105 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=d619703c-7778-4300-b934-6fd4a5852a3d 00:07:50.105 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:50.105 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:50.105 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:50.105 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:50.105 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:50.105 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d619703c-7778-4300-b934-6fd4a5852a3d -t 2000 00:07:50.363 [ 00:07:50.364 { 00:07:50.364 "name": "d619703c-7778-4300-b934-6fd4a5852a3d", 00:07:50.364 "aliases": [ 00:07:50.364 "lvs/lvol" 00:07:50.364 ], 00:07:50.364 "product_name": "Logical Volume", 00:07:50.364 "block_size": 4096, 00:07:50.364 "num_blocks": 38912, 00:07:50.364 "uuid": "d619703c-7778-4300-b934-6fd4a5852a3d", 00:07:50.364 "assigned_rate_limits": { 00:07:50.364 "rw_ios_per_sec": 0, 00:07:50.364 "rw_mbytes_per_sec": 0, 00:07:50.364 "r_mbytes_per_sec": 0, 00:07:50.364 "w_mbytes_per_sec": 0 00:07:50.364 }, 00:07:50.364 "claimed": false, 00:07:50.364 "zoned": false, 00:07:50.364 "supported_io_types": { 00:07:50.364 "read": true, 00:07:50.364 "write": true, 00:07:50.364 "unmap": true, 00:07:50.364 "flush": false, 00:07:50.364 "reset": true, 00:07:50.364 "nvme_admin": false, 00:07:50.364 "nvme_io": false, 00:07:50.364 "nvme_io_md": false, 00:07:50.364 "write_zeroes": true, 00:07:50.364 "zcopy": false, 00:07:50.364 "get_zone_info": false, 00:07:50.364 "zone_management": false, 00:07:50.364 "zone_append": false, 00:07:50.364 "compare": false, 00:07:50.364 "compare_and_write": false, 00:07:50.364 "abort": false, 00:07:50.364 "seek_hole": true, 00:07:50.364 "seek_data": true, 00:07:50.364 "copy": false, 00:07:50.364 "nvme_iov_md": false 00:07:50.364 }, 00:07:50.364 "driver_specific": { 00:07:50.364 "lvol": { 00:07:50.364 "lvol_store_uuid": "d450f9c7-f29e-436e-8d74-e1b1e1775100", 00:07:50.364 "base_bdev": "aio_bdev", 00:07:50.364 "thin_provision": false, 00:07:50.364 "num_allocated_clusters": 38, 00:07:50.364 "snapshot": false, 00:07:50.364 "clone": false, 00:07:50.364 "esnap_clone": false 00:07:50.364 } 00:07:50.364 } 00:07:50.364 } 00:07:50.364 ] 00:07:50.364 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:50.364 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:50.364 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d450f9c7-f29e-436e-8d74-e1b1e1775100 00:07:50.623 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:50.623 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d450f9c7-f29e-436e-8d74-e1b1e1775100 00:07:50.623 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:50.882 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:50.882 18:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d619703c-7778-4300-b934-6fd4a5852a3d 00:07:50.882 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d450f9c7-f29e-436e-8d74-e1b1e1775100 00:07:51.141 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:51.400 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:51.400 00:07:51.400 real 0m16.266s 00:07:51.400 user 0m15.877s 00:07:51.400 sys 0m1.534s 00:07:51.400 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.400 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:51.400 ************************************ 00:07:51.400 END TEST lvs_grow_clean 00:07:51.400 ************************************ 00:07:51.400 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:51.400 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:51.400 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.400 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.400 ************************************ 00:07:51.400 START TEST lvs_grow_dirty 00:07:51.400 ************************************ 00:07:51.400 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:51.400 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:51.400 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:51.400 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:51.401 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:51.401 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:51.401 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:51.401 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:51.401 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:51.401 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:51.660 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:51.660 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:51.919 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=218384bf-3624-48e4-b967-181549c33242 00:07:51.919 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 218384bf-3624-48e4-b967-181549c33242 00:07:51.919 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:51.919 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:51.919 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:51.919 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 218384bf-3624-48e4-b967-181549c33242 lvol 150 00:07:52.178 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c23cc751-750b-4a26-bcd5-e4bacb1ccc0b 00:07:52.178 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:52.178 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:52.437 [2024-10-08 18:15:45.561222] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:52.437 [2024-10-08 18:15:45.561267] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:52.437 true 00:07:52.437 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 218384bf-3624-48e4-b967-181549c33242 00:07:52.437 18:15:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:52.697 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:52.697 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:52.697 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c23cc751-750b-4a26-bcd5-e4bacb1ccc0b 00:07:52.955 18:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:53.214 [2024-10-08 18:15:46.299418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.214 18:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.214 18:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=275740 00:07:53.214 18:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.214 18:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:53.214 18:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 275740 /var/tmp/bdevperf.sock 00:07:53.214 18:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 275740 ']' 00:07:53.214 18:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:53.214 18:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.214 18:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:53.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:53.214 18:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.214 18:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:53.473 [2024-10-08 18:15:46.547144] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
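[Editor's note] The dirty pass reuses the exact bdevperf invocation from the clean pass, reconstructed here from the launch line above (as I read the flags, -z makes bdevperf wait for the perform_tests RPC, and -S 1 is the one-second stats period that produces the per-second Latency rows that follow):

# second SPDK app on core mask 0x2 with its own RPC socket: random
# 4 KiB writes, queue depth 128, 10 s run
build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
    -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# connect it to the target (prints Nvme0n1, as above) and start the run
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests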
00:07:53.473 [2024-10-08 18:15:46.547193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275740 ] 00:07:53.473 [2024-10-08 18:15:46.613296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.473 [2024-10-08 18:15:46.690728] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.411 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.411 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:54.411 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:54.411 Nvme0n1 00:07:54.411 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:54.670 [ 00:07:54.670 { 00:07:54.670 "name": "Nvme0n1", 00:07:54.670 "aliases": [ 00:07:54.670 "c23cc751-750b-4a26-bcd5-e4bacb1ccc0b" 00:07:54.670 ], 00:07:54.670 "product_name": "NVMe disk", 00:07:54.670 "block_size": 4096, 00:07:54.670 "num_blocks": 38912, 00:07:54.670 "uuid": "c23cc751-750b-4a26-bcd5-e4bacb1ccc0b", 00:07:54.670 "numa_id": 1, 00:07:54.670 "assigned_rate_limits": { 00:07:54.670 "rw_ios_per_sec": 0, 00:07:54.670 "rw_mbytes_per_sec": 0, 00:07:54.670 "r_mbytes_per_sec": 0, 00:07:54.670 "w_mbytes_per_sec": 0 00:07:54.670 }, 00:07:54.670 "claimed": false, 00:07:54.670 "zoned": false, 00:07:54.670 "supported_io_types": { 00:07:54.670 "read": true, 00:07:54.670 "write": true, 00:07:54.670 "unmap": true, 00:07:54.670 "flush": true, 00:07:54.670 "reset": true, 00:07:54.670 "nvme_admin": true, 00:07:54.670 "nvme_io": true, 00:07:54.670 "nvme_io_md": false, 00:07:54.670 "write_zeroes": true, 00:07:54.670 "zcopy": false, 00:07:54.670 "get_zone_info": false, 00:07:54.670 "zone_management": false, 00:07:54.670 "zone_append": false, 00:07:54.670 "compare": true, 00:07:54.670 "compare_and_write": true, 00:07:54.670 "abort": true, 00:07:54.670 "seek_hole": false, 00:07:54.670 "seek_data": false, 00:07:54.670 "copy": true, 00:07:54.670 "nvme_iov_md": false 00:07:54.670 }, 00:07:54.670 "memory_domains": [ 00:07:54.670 { 00:07:54.670 "dma_device_id": "system", 00:07:54.670 "dma_device_type": 1 00:07:54.670 } 00:07:54.670 ], 00:07:54.670 "driver_specific": { 00:07:54.670 "nvme": [ 00:07:54.670 { 00:07:54.670 "trid": { 00:07:54.670 "trtype": "TCP", 00:07:54.670 "adrfam": "IPv4", 00:07:54.670 "traddr": "10.0.0.2", 00:07:54.670 "trsvcid": "4420", 00:07:54.670 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:54.670 }, 00:07:54.670 "ctrlr_data": { 00:07:54.670 "cntlid": 1, 00:07:54.670 "vendor_id": "0x8086", 00:07:54.670 "model_number": "SPDK bdev Controller", 00:07:54.670 "serial_number": "SPDK0", 00:07:54.670 "firmware_revision": "25.01", 00:07:54.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:54.670 "oacs": { 00:07:54.670 "security": 0, 00:07:54.670 "format": 0, 00:07:54.670 "firmware": 0, 00:07:54.670 "ns_manage": 0 00:07:54.670 }, 00:07:54.670 "multi_ctrlr": true, 00:07:54.670 
"ana_reporting": false 00:07:54.670 }, 00:07:54.670 "vs": { 00:07:54.670 "nvme_version": "1.3" 00:07:54.670 }, 00:07:54.670 "ns_data": { 00:07:54.670 "id": 1, 00:07:54.670 "can_share": true 00:07:54.670 } 00:07:54.670 } 00:07:54.670 ], 00:07:54.670 "mp_policy": "active_passive" 00:07:54.670 } 00:07:54.670 } 00:07:54.670 ] 00:07:54.670 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=275949 00:07:54.670 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:54.670 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:54.670 Running I/O for 10 seconds... 00:07:56.048 Latency(us) 00:07:56.048 [2024-10-08T16:15:49.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.048 Nvme0n1 : 1.00 23372.00 91.30 0.00 0.00 0.00 0.00 0.00 00:07:56.048 [2024-10-08T16:15:49.371Z] =================================================================================================================== 00:07:56.048 [2024-10-08T16:15:49.371Z] Total : 23372.00 91.30 0.00 0.00 0.00 0.00 0.00 00:07:56.048 00:07:56.616 18:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 218384bf-3624-48e4-b967-181549c33242 00:07:56.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.616 Nvme0n1 : 2.00 23500.00 91.80 0.00 0.00 0.00 0.00 0.00 00:07:56.616 [2024-10-08T16:15:49.939Z] =================================================================================================================== 00:07:56.616 [2024-10-08T16:15:49.939Z] Total : 23500.00 91.80 0.00 0.00 0.00 0.00 0.00 00:07:56.616 00:07:56.875 true 00:07:56.876 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 218384bf-3624-48e4-b967-181549c33242 00:07:56.876 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:57.134 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:57.134 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:57.134 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 275949 00:07:57.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.700 Nvme0n1 : 3.00 23400.33 91.41 0.00 0.00 0.00 0.00 0.00 00:07:57.700 [2024-10-08T16:15:51.023Z] =================================================================================================================== 00:07:57.700 [2024-10-08T16:15:51.023Z] Total : 23400.33 91.41 0.00 0.00 0.00 0.00 0.00 00:07:57.700 00:07:58.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.637 Nvme0n1 : 4.00 23491.50 91.76 0.00 0.00 0.00 0.00 0.00 00:07:58.637 [2024-10-08T16:15:51.960Z] 
=================================================================================================================== 00:07:58.637 [2024-10-08T16:15:51.960Z] Total : 23491.50 91.76 0.00 0.00 0.00 0.00 0.00 00:07:58.637 00:08:00.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.015 Nvme0n1 : 5.00 23556.80 92.02 0.00 0.00 0.00 0.00 0.00 00:08:00.015 [2024-10-08T16:15:53.338Z] =================================================================================================================== 00:08:00.015 [2024-10-08T16:15:53.338Z] Total : 23556.80 92.02 0.00 0.00 0.00 0.00 0.00 00:08:00.015 00:08:00.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.951 Nvme0n1 : 6.00 23600.33 92.19 0.00 0.00 0.00 0.00 0.00 00:08:00.951 [2024-10-08T16:15:54.274Z] =================================================================================================================== 00:08:00.951 [2024-10-08T16:15:54.274Z] Total : 23600.33 92.19 0.00 0.00 0.00 0.00 0.00 00:08:00.951 00:08:01.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.891 Nvme0n1 : 7.00 23631.14 92.31 0.00 0.00 0.00 0.00 0.00 00:08:01.891 [2024-10-08T16:15:55.214Z] =================================================================================================================== 00:08:01.891 [2024-10-08T16:15:55.214Z] Total : 23631.14 92.31 0.00 0.00 0.00 0.00 0.00 00:08:01.891 00:08:02.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.827 Nvme0n1 : 8.00 23651.50 92.39 0.00 0.00 0.00 0.00 0.00 00:08:02.827 [2024-10-08T16:15:56.150Z] =================================================================================================================== 00:08:02.827 [2024-10-08T16:15:56.150Z] Total : 23651.50 92.39 0.00 0.00 0.00 0.00 0.00 00:08:02.827 00:08:03.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.763 Nvme0n1 : 9.00 23678.56 92.49 0.00 0.00 0.00 0.00 0.00 00:08:03.763 [2024-10-08T16:15:57.086Z] =================================================================================================================== 00:08:03.763 [2024-10-08T16:15:57.086Z] Total : 23678.56 92.49 0.00 0.00 0.00 0.00 0.00 00:08:03.763 00:08:04.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.698 Nvme0n1 : 10.00 23688.70 92.53 0.00 0.00 0.00 0.00 0.00 00:08:04.698 [2024-10-08T16:15:58.021Z] =================================================================================================================== 00:08:04.698 [2024-10-08T16:15:58.021Z] Total : 23688.70 92.53 0.00 0.00 0.00 0.00 0.00 00:08:04.698 00:08:04.698 00:08:04.698 Latency(us) 00:08:04.698 [2024-10-08T16:15:58.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.698 Nvme0n1 : 10.00 23694.07 92.55 0.00 0.00 5399.29 3105.16 9986.44 00:08:04.698 [2024-10-08T16:15:58.021Z] =================================================================================================================== 00:08:04.698 [2024-10-08T16:15:58.021Z] Total : 23694.07 92.55 0.00 0.00 5399.29 3105.16 9986.44 00:08:04.698 { 00:08:04.698 "results": [ 00:08:04.698 { 00:08:04.698 "job": "Nvme0n1", 00:08:04.698 "core_mask": "0x2", 00:08:04.698 "workload": "randwrite", 00:08:04.698 "status": "finished", 00:08:04.698 "queue_depth": 128, 00:08:04.698 "io_size": 4096, 00:08:04.698 
"runtime": 10.003134, 00:08:04.698 "iops": 23694.074277121552, 00:08:04.698 "mibps": 92.55497764500606, 00:08:04.698 "io_failed": 0, 00:08:04.698 "io_timeout": 0, 00:08:04.698 "avg_latency_us": 5399.29197045395, 00:08:04.698 "min_latency_us": 3105.158095238095, 00:08:04.698 "max_latency_us": 9986.438095238096 00:08:04.698 } 00:08:04.698 ], 00:08:04.698 "core_count": 1 00:08:04.698 } 00:08:04.698 18:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 275740 00:08:04.698 18:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 275740 ']' 00:08:04.698 18:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 275740 00:08:04.698 18:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:04.698 18:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.698 18:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 275740 00:08:04.957 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:04.957 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:04.957 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 275740' 00:08:04.957 killing process with pid 275740 00:08:04.957 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 275740 00:08:04.957 Received shutdown signal, test time was about 10.000000 seconds 00:08:04.957 00:08:04.957 Latency(us) 00:08:04.957 [2024-10-08T16:15:58.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.957 [2024-10-08T16:15:58.280Z] =================================================================================================================== 00:08:04.957 [2024-10-08T16:15:58.280Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:04.957 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 275740 00:08:04.957 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.215 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:05.473 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:05.473 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 218384bf-3624-48e4-b967-181549c33242 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:05.732 18:15:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 272389 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 272389 00:08:05.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 272389 Killed "${NVMF_APP[@]}" "$@" 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=277759 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 277759 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 277759 ']' 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.732 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.732 [2024-10-08 18:15:58.906943] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:08:05.732 [2024-10-08 18:15:58.906992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.732 [2024-10-08 18:15:58.977640] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.991 [2024-10-08 18:15:59.054898] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.991 [2024-10-08 18:15:59.054936] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.991 [2024-10-08 18:15:59.054943] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.991 [2024-10-08 18:15:59.054949] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:05.991 [2024-10-08 18:15:59.054955] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.991 [2024-10-08 18:15:59.055522] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.600 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.600 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:06.600 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:06.600 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:06.600 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.600 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.600 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:06.966 [2024-10-08 18:15:59.946582] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:06.966 [2024-10-08 18:15:59.946664] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:06.966 [2024-10-08 18:15:59.946691] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:06.966 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:06.966 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c23cc751-750b-4a26-bcd5-e4bacb1ccc0b 00:08:06.966 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c23cc751-750b-4a26-bcd5-e4bacb1ccc0b 00:08:06.966 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.966 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:06.966 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.966 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:06.966 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:06.966 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c23cc751-750b-4a26-bcd5-e4bacb1ccc0b -t 2000 00:08:07.226 [ 00:08:07.226 { 00:08:07.226 "name": "c23cc751-750b-4a26-bcd5-e4bacb1ccc0b", 00:08:07.226 "aliases": [ 00:08:07.226 "lvs/lvol" 00:08:07.226 ], 00:08:07.226 "product_name": "Logical Volume", 00:08:07.226 "block_size": 4096, 00:08:07.226 "num_blocks": 38912, 00:08:07.226 "uuid": "c23cc751-750b-4a26-bcd5-e4bacb1ccc0b", 00:08:07.226 "assigned_rate_limits": { 00:08:07.226 "rw_ios_per_sec": 0, 00:08:07.226 "rw_mbytes_per_sec": 0, 
00:08:07.226 "r_mbytes_per_sec": 0, 00:08:07.226 "w_mbytes_per_sec": 0 00:08:07.226 }, 00:08:07.226 "claimed": false, 00:08:07.226 "zoned": false, 00:08:07.226 "supported_io_types": { 00:08:07.226 "read": true, 00:08:07.226 "write": true, 00:08:07.226 "unmap": true, 00:08:07.226 "flush": false, 00:08:07.226 "reset": true, 00:08:07.226 "nvme_admin": false, 00:08:07.226 "nvme_io": false, 00:08:07.226 "nvme_io_md": false, 00:08:07.226 "write_zeroes": true, 00:08:07.226 "zcopy": false, 00:08:07.226 "get_zone_info": false, 00:08:07.226 "zone_management": false, 00:08:07.226 "zone_append": false, 00:08:07.226 "compare": false, 00:08:07.226 "compare_and_write": false, 00:08:07.226 "abort": false, 00:08:07.226 "seek_hole": true, 00:08:07.226 "seek_data": true, 00:08:07.226 "copy": false, 00:08:07.226 "nvme_iov_md": false 00:08:07.226 }, 00:08:07.226 "driver_specific": { 00:08:07.226 "lvol": { 00:08:07.226 "lvol_store_uuid": "218384bf-3624-48e4-b967-181549c33242", 00:08:07.226 "base_bdev": "aio_bdev", 00:08:07.226 "thin_provision": false, 00:08:07.226 "num_allocated_clusters": 38, 00:08:07.226 "snapshot": false, 00:08:07.226 "clone": false, 00:08:07.226 "esnap_clone": false 00:08:07.226 } 00:08:07.226 } 00:08:07.226 } 00:08:07.226 ] 00:08:07.226 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:07.226 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 218384bf-3624-48e4-b967-181549c33242 00:08:07.226 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:07.484 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:07.484 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:07.484 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 218384bf-3624-48e4-b967-181549c33242 00:08:07.485 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:07.485 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:07.743 [2024-10-08 18:16:00.919627] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:07.743 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 218384bf-3624-48e4-b967-181549c33242 00:08:07.743 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:07.743 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 218384bf-3624-48e4-b967-181549c33242 00:08:07.743 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.744 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.744 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.744 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.744 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.744 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.744 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.744 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:07.744 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 218384bf-3624-48e4-b967-181549c33242 00:08:08.001 request: 00:08:08.001 { 00:08:08.001 "uuid": "218384bf-3624-48e4-b967-181549c33242", 00:08:08.001 "method": "bdev_lvol_get_lvstores", 00:08:08.001 "req_id": 1 00:08:08.001 } 00:08:08.002 Got JSON-RPC error response 00:08:08.002 response: 00:08:08.002 { 00:08:08.002 "code": -19, 00:08:08.002 "message": "No such device" 00:08:08.002 } 00:08:08.002 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:08.002 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.002 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:08.002 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.002 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:08.002 aio_bdev 00:08:08.260 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c23cc751-750b-4a26-bcd5-e4bacb1ccc0b 00:08:08.260 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c23cc751-750b-4a26-bcd5-e4bacb1ccc0b 00:08:08.260 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.260 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:08.260 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.260 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.260 18:16:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:08.260 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c23cc751-750b-4a26-bcd5-e4bacb1ccc0b -t 2000 00:08:08.519 [ 00:08:08.519 { 00:08:08.519 "name": "c23cc751-750b-4a26-bcd5-e4bacb1ccc0b", 00:08:08.519 "aliases": [ 00:08:08.519 "lvs/lvol" 00:08:08.519 ], 00:08:08.519 "product_name": "Logical Volume", 00:08:08.519 "block_size": 4096, 00:08:08.519 "num_blocks": 38912, 00:08:08.519 "uuid": "c23cc751-750b-4a26-bcd5-e4bacb1ccc0b", 00:08:08.519 "assigned_rate_limits": { 00:08:08.519 "rw_ios_per_sec": 0, 00:08:08.519 "rw_mbytes_per_sec": 0, 00:08:08.519 "r_mbytes_per_sec": 0, 00:08:08.519 "w_mbytes_per_sec": 0 00:08:08.519 }, 00:08:08.519 "claimed": false, 00:08:08.519 "zoned": false, 00:08:08.519 "supported_io_types": { 00:08:08.519 "read": true, 00:08:08.519 "write": true, 00:08:08.519 "unmap": true, 00:08:08.519 "flush": false, 00:08:08.519 "reset": true, 00:08:08.519 "nvme_admin": false, 00:08:08.519 "nvme_io": false, 00:08:08.519 "nvme_io_md": false, 00:08:08.519 "write_zeroes": true, 00:08:08.519 "zcopy": false, 00:08:08.519 "get_zone_info": false, 00:08:08.519 "zone_management": false, 00:08:08.519 "zone_append": false, 00:08:08.519 "compare": false, 00:08:08.519 "compare_and_write": false, 00:08:08.519 "abort": false, 00:08:08.519 "seek_hole": true, 00:08:08.519 "seek_data": true, 00:08:08.519 "copy": false, 00:08:08.519 "nvme_iov_md": false 00:08:08.519 }, 00:08:08.519 "driver_specific": { 00:08:08.519 "lvol": { 00:08:08.519 "lvol_store_uuid": "218384bf-3624-48e4-b967-181549c33242", 00:08:08.519 "base_bdev": "aio_bdev", 00:08:08.519 "thin_provision": false, 00:08:08.519 "num_allocated_clusters": 38, 00:08:08.519 "snapshot": false, 00:08:08.519 "clone": false, 00:08:08.519 "esnap_clone": false 00:08:08.519 } 00:08:08.519 } 00:08:08.519 } 00:08:08.519 ] 00:08:08.519 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:08.519 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 218384bf-3624-48e4-b967-181549c33242 00:08:08.519 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:08.777 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:08.777 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 218384bf-3624-48e4-b967-181549c33242 00:08:08.777 18:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:08.777 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:08.777 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c23cc751-750b-4a26-bcd5-e4bacb1ccc0b 00:08:09.035 18:16:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 218384bf-3624-48e4-b967-181549c33242 00:08:09.294 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:09.553 00:08:09.553 real 0m18.018s 00:08:09.553 user 0m46.321s 00:08:09.553 sys 0m3.845s 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.553 ************************************ 00:08:09.553 END TEST lvs_grow_dirty 00:08:09.553 ************************************ 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:09.553 nvmf_trace.0 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:09.553 rmmod nvme_tcp 00:08:09.553 rmmod nvme_fabrics 00:08:09.553 rmmod nvme_keyring 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:09.553 
18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 277759 ']' 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 277759 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 277759 ']' 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 277759 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 277759 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 277759' 00:08:09.553 killing process with pid 277759 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 277759 00:08:09.553 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 277759 00:08:09.812 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:09.812 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:09.812 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:09.812 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:09.812 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:08:09.812 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:09.813 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:08:09.813 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:09.813 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:09.813 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.813 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.813 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:12.350 00:08:12.350 real 0m44.236s 00:08:12.350 user 1m8.489s 00:08:12.350 sys 0m10.481s 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.350 ************************************ 00:08:12.350 END TEST nvmf_lvs_grow 00:08:12.350 ************************************ 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.350 ************************************ 00:08:12.350 START TEST nvmf_bdev_io_wait 00:08:12.350 ************************************ 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:12.350 * Looking for test storage... 00:08:12.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:12.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.350 --rc genhtml_branch_coverage=1 00:08:12.350 --rc genhtml_function_coverage=1 00:08:12.350 --rc genhtml_legend=1 00:08:12.350 --rc geninfo_all_blocks=1 00:08:12.350 --rc geninfo_unexecuted_blocks=1 00:08:12.350 00:08:12.350 ' 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:12.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.350 --rc genhtml_branch_coverage=1 00:08:12.350 --rc genhtml_function_coverage=1 00:08:12.350 --rc genhtml_legend=1 00:08:12.350 --rc geninfo_all_blocks=1 00:08:12.350 --rc geninfo_unexecuted_blocks=1 00:08:12.350 00:08:12.350 ' 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:12.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.350 --rc genhtml_branch_coverage=1 00:08:12.350 --rc genhtml_function_coverage=1 00:08:12.350 --rc genhtml_legend=1 00:08:12.350 --rc geninfo_all_blocks=1 00:08:12.350 --rc geninfo_unexecuted_blocks=1 00:08:12.350 00:08:12.350 ' 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:12.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.350 --rc genhtml_branch_coverage=1 00:08:12.350 --rc genhtml_function_coverage=1 00:08:12.350 --rc genhtml_legend=1 00:08:12.350 --rc geninfo_all_blocks=1 00:08:12.350 --rc geninfo_unexecuted_blocks=1 00:08:12.350 00:08:12.350 ' 00:08:12.350 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.351 18:16:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:12.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:12.351 18:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:18.935 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.935 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:18.936 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.936 18:16:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:18.936 Found net devices under 0000:86:00.0: cvl_0_0 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:18.936 Found net devices under 0000:86:00.1: cvl_0_1 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:18.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:08:18.936 00:08:18.936 --- 10.0.0.2 ping statistics --- 00:08:18.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.936 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:18.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:08:18.936 00:08:18.936 --- 10.0.0.1 ping statistics --- 00:08:18.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.936 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=282058 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 282058 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 282058 ']' 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.936 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.936 [2024-10-08 18:16:11.486246] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
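The nvmf_tcp_init trace above is the whole point-to-point topology for these TCP tests: one port of the NIC is moved into a private network namespace and addressed as the target (10.0.0.2), the other port stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens the NVMe/TCP listener port, and a ping in each direction proves the link before the target application is started inside the namespace. A minimal sketch of the same sequence, assuming two back-to-back ports named eth_tgt and eth_ini (hypothetical names; the trace uses cvl_0_0 and cvl_0_1):

#!/usr/bin/env bash
# Rebuild the two-port test topology traced above.
# eth_tgt/eth_ini are hypothetical names; the trace uses cvl_0_0 (target)
# and cvl_0_1 (initiator).
set -euo pipefail

NS=tgt_ns
TGT_IF=eth_tgt  TGT_IP=10.0.0.2
INI_IF=eth_ini  INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"            # target port leaves the root namespace
ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port on the initiator side; the comment tags
# the rule so teardown can strip it with iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: test rule'

ping -c 1 "$TGT_IP"                          # initiator -> target
ip netns exec "$NS" ping -c 1 "$INI_IP"      # target -> initiator

Isolating the target port in its own namespace is what lets initiator and target share one host over real NIC ports: without the namespace, the kernel would short-circuit 10.0.0.1 -> 10.0.0.2 over loopback instead of driving traffic through the e810 hardware.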
00:08:18.936 [2024-10-08 18:16:11.486289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.936 [2024-10-08 18:16:11.556133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.936 [2024-10-08 18:16:11.629651] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.936 [2024-10-08 18:16:11.629696] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.936 [2024-10-08 18:16:11.629703] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.936 [2024-10-08 18:16:11.629709] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.936 [2024-10-08 18:16:11.629714] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.936 [2024-10-08 18:16:11.631309] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.936 [2024-10-08 18:16:11.631430] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.936 [2024-10-08 18:16:11.631478] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.936 [2024-10-08 18:16:11.631478] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:19.197 [2024-10-08 18:16:12.437934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.197 Malloc0 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.197 [2024-10-08 18:16:12.509921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=282269 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=282271 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:19.197 { 00:08:19.197 "params": { 
00:08:19.197 "name": "Nvme$subsystem", 00:08:19.197 "trtype": "$TEST_TRANSPORT", 00:08:19.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:19.197 "adrfam": "ipv4", 00:08:19.197 "trsvcid": "$NVMF_PORT", 00:08:19.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:19.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:19.197 "hdgst": ${hdgst:-false}, 00:08:19.197 "ddgst": ${ddgst:-false} 00:08:19.197 }, 00:08:19.197 "method": "bdev_nvme_attach_controller" 00:08:19.197 } 00:08:19.197 EOF 00:08:19.197 )") 00:08:19.197 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:19.457 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=282273 00:08:19.457 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:19.457 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:19.457 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:19.457 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:19.457 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:19.457 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:19.457 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:19.457 { 00:08:19.457 "params": { 00:08:19.457 "name": "Nvme$subsystem", 00:08:19.457 "trtype": "$TEST_TRANSPORT", 00:08:19.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:19.457 "adrfam": "ipv4", 00:08:19.457 "trsvcid": "$NVMF_PORT", 00:08:19.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:19.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:19.458 "hdgst": ${hdgst:-false}, 00:08:19.458 "ddgst": ${ddgst:-false} 00:08:19.458 }, 00:08:19.458 "method": "bdev_nvme_attach_controller" 00:08:19.458 } 00:08:19.458 EOF 00:08:19.458 )") 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=282276 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:19.458 { 00:08:19.458 "params": { 00:08:19.458 "name": "Nvme$subsystem", 00:08:19.458 "trtype": "$TEST_TRANSPORT", 00:08:19.458 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:08:19.458 "adrfam": "ipv4", 00:08:19.458 "trsvcid": "$NVMF_PORT", 00:08:19.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:19.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:19.458 "hdgst": ${hdgst:-false}, 00:08:19.458 "ddgst": ${ddgst:-false} 00:08:19.458 }, 00:08:19.458 "method": "bdev_nvme_attach_controller" 00:08:19.458 } 00:08:19.458 EOF 00:08:19.458 )") 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:19.458 { 00:08:19.458 "params": { 00:08:19.458 "name": "Nvme$subsystem", 00:08:19.458 "trtype": "$TEST_TRANSPORT", 00:08:19.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:19.458 "adrfam": "ipv4", 00:08:19.458 "trsvcid": "$NVMF_PORT", 00:08:19.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:19.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:19.458 "hdgst": ${hdgst:-false}, 00:08:19.458 "ddgst": ${ddgst:-false} 00:08:19.458 }, 00:08:19.458 "method": "bdev_nvme_attach_controller" 00:08:19.458 } 00:08:19.458 EOF 00:08:19.458 )") 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 282269 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:19.458 "params": { 00:08:19.458 "name": "Nvme1", 00:08:19.458 "trtype": "tcp", 00:08:19.458 "traddr": "10.0.0.2", 00:08:19.458 "adrfam": "ipv4", 00:08:19.458 "trsvcid": "4420", 00:08:19.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:19.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:19.458 "hdgst": false, 00:08:19.458 "ddgst": false 00:08:19.458 }, 00:08:19.458 "method": "bdev_nvme_attach_controller" 00:08:19.458 }' 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:19.458 "params": { 00:08:19.458 "name": "Nvme1", 00:08:19.458 "trtype": "tcp", 00:08:19.458 "traddr": "10.0.0.2", 00:08:19.458 "adrfam": "ipv4", 00:08:19.458 "trsvcid": "4420", 00:08:19.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:19.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:19.458 "hdgst": false, 00:08:19.458 "ddgst": false 00:08:19.458 }, 00:08:19.458 "method": "bdev_nvme_attach_controller" 00:08:19.458 }' 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:19.458 "params": { 00:08:19.458 "name": "Nvme1", 00:08:19.458 "trtype": "tcp", 00:08:19.458 "traddr": "10.0.0.2", 00:08:19.458 "adrfam": "ipv4", 00:08:19.458 "trsvcid": "4420", 00:08:19.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:19.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:19.458 "hdgst": false, 00:08:19.458 "ddgst": false 00:08:19.458 }, 00:08:19.458 "method": "bdev_nvme_attach_controller" 00:08:19.458 }' 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:19.458 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:19.458 "params": { 00:08:19.458 "name": "Nvme1", 00:08:19.458 "trtype": "tcp", 00:08:19.458 "traddr": "10.0.0.2", 00:08:19.458 "adrfam": "ipv4", 00:08:19.458 "trsvcid": "4420", 00:08:19.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:19.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:19.458 "hdgst": false, 00:08:19.458 "ddgst": false 00:08:19.458 }, 00:08:19.458 "method": "bdev_nvme_attach_controller" 00:08:19.458 }' 00:08:19.458 [2024-10-08 18:16:12.560842] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:08:19.458 [2024-10-08 18:16:12.560888] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:19.458 [2024-10-08 18:16:12.561224] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:08:19.458 [2024-10-08 18:16:12.561263] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:19.458 [2024-10-08 18:16:12.563304] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:08:19.458 [2024-10-08 18:16:12.563326] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
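The four JSON blobs printed here are the output of gen_nvmf_target_json: each bdevperf instance receives a bdev_nvme_attach_controller config assembled from a quoted heredoc and delivered over process substitution, which is why every bdevperf command line in the trace shows --json /dev/fd/63. A single-controller reduction of that pattern, with the values copied from the config printed above and the bdevperf path shortened for readability:

# Simplified single-controller version of gen_nvmf_target_json; the real
# helper loops over subsystems and substitutes env vars into the heredoc.
gen_nvmf_target_json() {
  cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# <(...) is where the literal /dev/fd/63 in the trace comes from: bdevperf
# reads the config from an anonymous pipe instead of a file on disk.
./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
wait "$WRITE_PID"

Keeping each per-instance config in a pipe rather than a temp file means the four concurrent write/read/flush/unmap jobs launched here cannot clobber each other's configuration.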
00:08:19.458 [2024-10-08 18:16:12.563343] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:19.458 [2024-10-08 18:16:12.563368] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:19.458 [2024-10-08 18:16:12.745642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.717 [2024-10-08 18:16:12.823686] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:08:19.717 [2024-10-08 18:16:12.836570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.717 [2024-10-08 18:16:12.913342] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:08:19.717 [2024-10-08 18:16:12.929029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.717 [2024-10-08 18:16:12.989587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.717 [2024-10-08 18:16:13.010390] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:08:19.976 [2024-10-08 18:16:13.066540] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:08:19.976 Running I/O for 1 seconds... 00:08:20.235 Running I/O for 1 seconds... 00:08:20.235 Running I/O for 1 seconds... 00:08:20.235 Running I/O for 1 seconds... 00:08:21.171 248928.00 IOPS, 972.38 MiB/s 00:08:21.171 Latency(us) 00:08:21.171 [2024-10-08T16:16:14.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.171 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:21.171 Nvme1n1 : 1.00 248553.27 970.91 0.00 0.00 512.18 255.51 1669.61 00:08:21.171 [2024-10-08T16:16:14.494Z] =================================================================================================================== 00:08:21.171 [2024-10-08T16:16:14.494Z] Total : 248553.27 970.91 0.00 0.00 512.18 255.51 1669.61 00:08:21.171 8417.00 IOPS, 32.88 MiB/s 00:08:21.171 Latency(us) 00:08:21.171 [2024-10-08T16:16:14.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.171 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:21.171 Nvme1n1 : 1.02 8382.47 32.74 0.00 0.00 15118.35 5617.37 26588.89 00:08:21.171 [2024-10-08T16:16:14.494Z] =================================================================================================================== 00:08:21.171 [2024-10-08T16:16:14.494Z] Total : 8382.47 32.74 0.00 0.00 15118.35 5617.37 26588.89 00:08:21.171 12641.00 IOPS, 49.38 MiB/s 00:08:21.171 Latency(us) 00:08:21.171 [2024-10-08T16:16:14.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.171 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:21.171 Nvme1n1 : 1.01 12701.32 49.61 0.00 0.00 10046.69 4369.07 20097.71 00:08:21.171 [2024-10-08T16:16:14.494Z] =================================================================================================================== 00:08:21.171 [2024-10-08T16:16:14.494Z] Total : 12701.32 49.61 0.00 0.00 10046.69 4369.07 20097.71 00:08:21.430 7908.00 IOPS, 30.89 MiB/s 00:08:21.430 Latency(us) 00:08:21.430 [2024-10-08T16:16:14.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:08:21.430 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:21.430 Nvme1n1 : 1.01 8005.14 31.27 0.00 0.00 15951.26 3526.46 39446.43 00:08:21.430 [2024-10-08T16:16:14.753Z] =================================================================================================================== 00:08:21.430 [2024-10-08T16:16:14.753Z] Total : 8005.14 31.27 0.00 0.00 15951.26 3526.46 39446.43 00:08:21.430 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 282271 00:08:21.430 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 282273 00:08:21.430 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 282276 00:08:21.430 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:21.430 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.430 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:21.690 rmmod nvme_tcp 00:08:21.690 rmmod nvme_fabrics 00:08:21.690 rmmod nvme_keyring 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 282058 ']' 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 282058 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 282058 ']' 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 282058 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 282058 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:21.690 18:16:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 282058' 00:08:21.690 killing process with pid 282058 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 282058 00:08:21.690 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 282058 00:08:21.949 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:21.949 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:21.949 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:21.949 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:21.949 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:08:21.949 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:21.949 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:08:21.949 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:21.950 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:21.950 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.950 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.950 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.854 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:23.854 00:08:23.854 real 0m11.917s 00:08:23.854 user 0m21.096s 00:08:23.854 sys 0m6.406s 00:08:23.854 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.854 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.854 ************************************ 00:08:23.854 END TEST nvmf_bdev_io_wait 00:08:23.854 ************************************ 00:08:23.854 18:16:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:23.854 18:16:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:23.854 18:16:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.854 18:16:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:24.114 ************************************ 00:08:24.114 START TEST nvmf_queue_depth 00:08:24.114 ************************************ 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:24.114 * Looking for test storage... 
00:08:24.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:24.114 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:24.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.115 --rc genhtml_branch_coverage=1 00:08:24.115 --rc genhtml_function_coverage=1 00:08:24.115 --rc genhtml_legend=1 00:08:24.115 --rc geninfo_all_blocks=1 00:08:24.115 --rc geninfo_unexecuted_blocks=1 00:08:24.115 00:08:24.115 ' 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:24.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.115 --rc genhtml_branch_coverage=1 00:08:24.115 --rc genhtml_function_coverage=1 00:08:24.115 --rc genhtml_legend=1 00:08:24.115 --rc geninfo_all_blocks=1 00:08:24.115 --rc geninfo_unexecuted_blocks=1 00:08:24.115 00:08:24.115 ' 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:24.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.115 --rc genhtml_branch_coverage=1 00:08:24.115 --rc genhtml_function_coverage=1 00:08:24.115 --rc genhtml_legend=1 00:08:24.115 --rc geninfo_all_blocks=1 00:08:24.115 --rc geninfo_unexecuted_blocks=1 00:08:24.115 00:08:24.115 ' 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:24.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.115 --rc genhtml_branch_coverage=1 00:08:24.115 --rc genhtml_function_coverage=1 00:08:24.115 --rc genhtml_legend=1 00:08:24.115 --rc geninfo_all_blocks=1 00:08:24.115 --rc geninfo_unexecuted_blocks=1 00:08:24.115 00:08:24.115 ' 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:24.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:24.115 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.688 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:30.689 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:30.689 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:30.689 Found net devices under 0000:86:00.0: cvl_0_0 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:30.689 Found net devices under 0000:86:00.1: cvl_0_1 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:30.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:08:30.689 00:08:30.689 --- 10.0.0.2 ping statistics --- 00:08:30.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.689 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:30.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:08:30.689 00:08:30.689 --- 10.0.0.1 ping statistics --- 00:08:30.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.689 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=286295 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 286295 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 286295 ']' 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.689 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.689 [2024-10-08 18:16:23.511912] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
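For orientation: the nvmf_tcp_init sequence traced above is what makes phy-mode testing possible on one host. It splits the two E810 ports (0000:86:00.0/cvl_0_0 and 0000:86:00.1/cvl_0_1) into an initiator interface left in the default namespace and a target interface moved into the cvl_0_0_ns_spdk namespace, opens the NVMe/TCP port, and verifies reachability both ways. Condensed from the trace, with the interface names and addresses of this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port disappears into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # the SPDK_NVMF comment tag is what lets nvmftestfini strip exactly this rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Every NVMF_APP invocation from here on is prefixed with ip netns exec cvl_0_0_ns_spdk (common.sh@293), which is why the nvmf_tgt starting below listens on 10.0.0.2 while bdevperf will connect from the default namespace.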
00:08:30.689 [2024-10-08 18:16:23.511955] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.690 [2024-10-08 18:16:23.586026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.690 [2024-10-08 18:16:23.655717] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.690 [2024-10-08 18:16:23.655757] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.690 [2024-10-08 18:16:23.655764] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.690 [2024-10-08 18:16:23.655771] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.690 [2024-10-08 18:16:23.655776] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.690 [2024-10-08 18:16:23.656345] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.259 [2024-10-08 18:16:24.372231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.259 Malloc0 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.259 18:16:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.259 [2024-10-08 18:16:24.435993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=286537 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 286537 /var/tmp/bdevperf.sock 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 286537 ']' 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:31.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.259 18:16:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.259 [2024-10-08 18:16:24.487756] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
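At this point the target side is fully provisioned and bdevperf is coming up as the initiator; the attach and perform_tests calls follow in the trace. The whole queue_depth hand-off condenses to the sketch below (rpc_cmd in the trace wraps scripts/rpc.py against /var/tmp/spdk.sock; the jenkins workspace paths are shortened to repo-relative ones here):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport with this run's options
  $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator: -z parks bdevperf until it has been configured over its own RPC socket
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The point of the test is the -q 1024: a queue depth well beyond typical defaults, which the 10-second verify workload below sustains at roughly 12.6k IOPS with zero failed and zero timed-out I/Os.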
00:08:31.259 [2024-10-08 18:16:24.487799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286537 ] 00:08:31.259 [2024-10-08 18:16:24.556871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.519 [2024-10-08 18:16:24.634958] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.087 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.087 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:32.087 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:32.087 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.087 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.087 NVMe0n1 00:08:32.087 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.087 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:32.346 Running I/O for 10 seconds... 00:08:34.223 12189.00 IOPS, 47.61 MiB/s [2024-10-08T16:16:28.933Z] 12283.00 IOPS, 47.98 MiB/s [2024-10-08T16:16:29.502Z] 12390.67 IOPS, 48.40 MiB/s [2024-10-08T16:16:30.883Z] 12511.00 IOPS, 48.87 MiB/s [2024-10-08T16:16:31.822Z] 12477.40 IOPS, 48.74 MiB/s [2024-10-08T16:16:32.760Z] 12540.00 IOPS, 48.98 MiB/s [2024-10-08T16:16:33.697Z] 12554.14 IOPS, 49.04 MiB/s [2024-10-08T16:16:34.634Z] 12562.00 IOPS, 49.07 MiB/s [2024-10-08T16:16:35.569Z] 12593.78 IOPS, 49.19 MiB/s [2024-10-08T16:16:35.569Z] 12578.50 IOPS, 49.13 MiB/s 00:08:42.246 Latency(us) 00:08:42.246 [2024-10-08T16:16:35.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.246 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:42.246 Verification LBA range: start 0x0 length 0x4000 00:08:42.246 NVMe0n1 : 10.05 12614.07 49.27 0.00 0.00 80926.87 13731.35 50930.83 00:08:42.246 [2024-10-08T16:16:35.569Z] =================================================================================================================== 00:08:42.246 [2024-10-08T16:16:35.569Z] Total : 12614.07 49.27 0.00 0.00 80926.87 13731.35 50930.83 00:08:42.246 { 00:08:42.246 "results": [ 00:08:42.246 { 00:08:42.246 "job": "NVMe0n1", 00:08:42.246 "core_mask": "0x1", 00:08:42.246 "workload": "verify", 00:08:42.246 "status": "finished", 00:08:42.246 "verify_range": { 00:08:42.246 "start": 0, 00:08:42.246 "length": 16384 00:08:42.246 }, 00:08:42.246 "queue_depth": 1024, 00:08:42.246 "io_size": 4096, 00:08:42.246 "runtime": 10.05171, 00:08:42.246 "iops": 12614.072630428056, 00:08:42.246 "mibps": 49.273721212609594, 00:08:42.246 "io_failed": 0, 00:08:42.246 "io_timeout": 0, 00:08:42.246 "avg_latency_us": 80926.8736969331, 00:08:42.246 "min_latency_us": 13731.352380952381, 00:08:42.246 "max_latency_us": 50930.834285714285 00:08:42.246 } 00:08:42.246 ], 00:08:42.246 "core_count": 1 00:08:42.246 } 00:08:42.505 18:16:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 286537 00:08:42.505 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 286537 ']' 00:08:42.505 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 286537 00:08:42.505 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:42.505 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.505 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 286537 00:08:42.505 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.505 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.505 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 286537' 00:08:42.505 killing process with pid 286537 00:08:42.505 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 286537 00:08:42.505 Received shutdown signal, test time was about 10.000000 seconds 00:08:42.505 00:08:42.505 Latency(us) 00:08:42.505 [2024-10-08T16:16:35.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.505 [2024-10-08T16:16:35.828Z] =================================================================================================================== 00:08:42.505 [2024-10-08T16:16:35.828Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:42.505 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 286537 00:08:42.505 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:42.505 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:42.505 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:42.505 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:42.505 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.764 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.765 rmmod nvme_tcp 00:08:42.765 rmmod nvme_fabrics 00:08:42.765 rmmod nvme_keyring 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 286295 ']' 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 286295 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 286295 ']' 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 286295 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 286295 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 286295' 00:08:42.765 killing process with pid 286295 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 286295 00:08:42.765 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 286295 00:08:43.024 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:43.024 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:43.024 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:43.024 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:43.024 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:43.025 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:43.025 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:43.025 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.025 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:43.025 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.025 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.025 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.931 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:44.931 00:08:44.931 real 0m21.016s 00:08:44.931 user 0m24.996s 00:08:44.931 sys 0m6.326s 00:08:44.931 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.931 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.931 ************************************ 00:08:44.931 END TEST nvmf_queue_depth 00:08:44.931 ************************************ 00:08:44.931 18:16:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:44.931 18:16:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:44.931 18:16:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.931 18:16:38 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:45.191 ************************************ 00:08:45.191 START TEST nvmf_target_multipath 00:08:45.191 ************************************ 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:45.191 * Looking for test storage... 00:08:45.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:45.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.191 --rc genhtml_branch_coverage=1 00:08:45.191 --rc genhtml_function_coverage=1 00:08:45.191 --rc genhtml_legend=1 00:08:45.191 --rc geninfo_all_blocks=1 00:08:45.191 --rc geninfo_unexecuted_blocks=1 00:08:45.191 00:08:45.191 ' 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:45.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.191 --rc genhtml_branch_coverage=1 00:08:45.191 --rc genhtml_function_coverage=1 00:08:45.191 --rc genhtml_legend=1 00:08:45.191 --rc geninfo_all_blocks=1 00:08:45.191 --rc geninfo_unexecuted_blocks=1 00:08:45.191 00:08:45.191 ' 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:45.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.191 --rc genhtml_branch_coverage=1 00:08:45.191 --rc genhtml_function_coverage=1 00:08:45.191 --rc genhtml_legend=1 00:08:45.191 --rc geninfo_all_blocks=1 00:08:45.191 --rc geninfo_unexecuted_blocks=1 00:08:45.191 00:08:45.191 ' 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:45.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.191 --rc genhtml_branch_coverage=1 00:08:45.191 --rc genhtml_function_coverage=1 00:08:45.191 --rc genhtml_legend=1 00:08:45.191 --rc geninfo_all_blocks=1 00:08:45.191 --rc geninfo_unexecuted_blocks=1 00:08:45.191 00:08:45.191 ' 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.191 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.192 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:51.766 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:51.766 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:51.766 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:51.767 Found net devices under 0000:86:00.0: cvl_0_0 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.767 18:16:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:51.767 Found net devices under 0000:86:00.1: cvl_0_1 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:51.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:08:51.767 00:08:51.767 --- 10.0.0.2 ping statistics --- 00:08:51.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.767 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:08:51.767 00:08:51.767 --- 10.0.0.1 ping statistics --- 00:08:51.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.767 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:51.767 only one NIC for nvmf test 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
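The multipath test ends before doing any multipath work: the '[' -z ']' at multipath.sh@45 is testing a second-target address this rig never sets (NVMF_SECOND_TARGET_IP stays empty at nvmf/common.sh@262, since both E810 ports are already consumed as the target/initiator pair), so it prints 'only one NIC for nvmf test' and exits 0 after cleanup. The nvmftestfini teardown now starting reduces to roughly the sketch below; note that the iptr pipeline and the namespace removal inside _remove_spdk_ns are inferred from the helper names and their visible output, not spelled out verbatim in the trace:

  sync
  modprobe -v -r nvme-tcp       # emits the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines just below
  modprobe -v -r nvme-fabrics   # fabrics may already be gone; the set +e retry loop tolerates that
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1

Unloading the modules and removing the namespace leaves a clean slate, so the next test in the suite (nvmf_zcopy below) can redo its own nvmf_tcp_init from scratch.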
00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:51.767 rmmod nvme_tcp 00:08:51.767 rmmod nvme_fabrics 00:08:51.767 rmmod nvme_keyring 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.767 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.675 00:08:53.675 real 0m8.396s 00:08:53.675 user 0m1.815s 00:08:53.675 sys 0m4.587s 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:53.675 ************************************ 00:08:53.675 END TEST nvmf_target_multipath 00:08:53.675 ************************************ 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.675 ************************************ 00:08:53.675 START TEST nvmf_zcopy 00:08:53.675 ************************************ 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:53.675 * Looking for test storage... 
00:08:53.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.675 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:53.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.675 --rc genhtml_branch_coverage=1 00:08:53.676 --rc genhtml_function_coverage=1 00:08:53.676 --rc genhtml_legend=1 00:08:53.676 --rc geninfo_all_blocks=1 00:08:53.676 --rc geninfo_unexecuted_blocks=1 00:08:53.676 00:08:53.676 ' 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.676 --rc genhtml_branch_coverage=1 00:08:53.676 --rc genhtml_function_coverage=1 00:08:53.676 --rc genhtml_legend=1 00:08:53.676 --rc geninfo_all_blocks=1 00:08:53.676 --rc geninfo_unexecuted_blocks=1 00:08:53.676 00:08:53.676 ' 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.676 --rc genhtml_branch_coverage=1 00:08:53.676 --rc genhtml_function_coverage=1 00:08:53.676 --rc genhtml_legend=1 00:08:53.676 --rc geninfo_all_blocks=1 00:08:53.676 --rc geninfo_unexecuted_blocks=1 00:08:53.676 00:08:53.676 ' 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.676 --rc genhtml_branch_coverage=1 00:08:53.676 --rc genhtml_function_coverage=1 00:08:53.676 --rc genhtml_legend=1 00:08:53.676 --rc geninfo_all_blocks=1 00:08:53.676 --rc geninfo_unexecuted_blocks=1 00:08:53.676 00:08:53.676 ' 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.676 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.252 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:00.253 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:00.253 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:00.253 Found net devices under 0000:86:00.0: cvl_0_0 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:00.253 Found net devices under 0000:86:00.1: cvl_0_1 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:00.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:09:00.253 00:09:00.253 --- 10.0.0.2 ping statistics --- 00:09:00.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.253 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:09:00.253 00:09:00.253 --- 10.0.0.1 ping statistics --- 00:09:00.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.253 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:00.253 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:00.253 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:00.253 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:00.253 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:00.253 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.253 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=295450 00:09:00.253 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:00.253 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 295450 00:09:00.253 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 295450 ']' 00:09:00.253 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.253 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.253 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.253 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.253 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.253 [2024-10-08 18:16:53.066041] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
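With the same network scaffolding rebuilt for the zcopy test, the nvmfappstart step above reduces to launching the target inside the namespace and waiting for its RPC socket; a sketch with this run's values inlined:

  # nvmf_tgt binds 10.0.0.2 because it runs inside the target namespace;
  # -i 0 = shm id, -e 0xFFFF = all tracepoint groups (see the NOTICEs below),
  # -m 0x2 = reactor pinned to core 1
  ip netns exec cvl_0_0_ns_spdk \
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # harness helper: blocks until /var/tmp/spdk.sock answers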
00:09:00.253 [2024-10-08 18:16:53.066090] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.253 [2024-10-08 18:16:53.138586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.253 [2024-10-08 18:16:53.215486] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.254 [2024-10-08 18:16:53.215523] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.254 [2024-10-08 18:16:53.215530] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.254 [2024-10-08 18:16:53.215536] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.254 [2024-10-08 18:16:53.215541] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.254 [2024-10-08 18:16:53.216087] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.821 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.821 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:00.821 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:00.821 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:00.821 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.821 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.821 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:00.821 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.822 [2024-10-08 18:16:53.936641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.822 [2024-10-08 18:16:53.956799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.822 malloc0 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.822 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.822 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.822 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:00.822 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:00.822 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:00.822 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:00.822 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:00.822 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:00.822 { 00:09:00.822 "params": { 00:09:00.822 "name": "Nvme$subsystem", 00:09:00.822 "trtype": "$TEST_TRANSPORT", 00:09:00.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.822 "adrfam": "ipv4", 00:09:00.822 "trsvcid": "$NVMF_PORT", 00:09:00.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.822 "hdgst": ${hdgst:-false}, 00:09:00.822 "ddgst": ${ddgst:-false} 00:09:00.822 }, 00:09:00.822 "method": "bdev_nvme_attach_controller" 00:09:00.822 } 00:09:00.822 EOF 00:09:00.822 )") 00:09:00.822 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:00.822 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
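Between the transport init and the first bdevperf launch above, zcopy.sh provisions the target over RPC. The same sequence written out as direct scripts/rpc.py invocations (rpc_cmd is the harness wrapper around these RPCs; flags exactly as this run passed them):

  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport, zero-copy on
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                               # allow any host, max 10 namespaces
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MiB RAM bdev, 4 KiB blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The bdevperf run that follows attaches to this subsystem through the JSON printed next: gen_nvmf_target_json resolves $NVMF_FIRST_TARGET_IP and $NVMF_PORT into the bdev_nvme_attach_controller parameters.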
00:09:00.822 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:00.822 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:00.822 "params": { 00:09:00.822 "name": "Nvme1", 00:09:00.822 "trtype": "tcp", 00:09:00.822 "traddr": "10.0.0.2", 00:09:00.822 "adrfam": "ipv4", 00:09:00.822 "trsvcid": "4420", 00:09:00.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.822 "hdgst": false, 00:09:00.822 "ddgst": false 00:09:00.822 }, 00:09:00.822 "method": "bdev_nvme_attach_controller" 00:09:00.822 }' 00:09:00.822 [2024-10-08 18:16:54.054244] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:09:00.822 [2024-10-08 18:16:54.054285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid295695 ] 00:09:00.822 [2024-10-08 18:16:54.121109] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.081 [2024-10-08 18:16:54.194925] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.081 Running I/O for 10 seconds... 00:09:03.394 8631.00 IOPS, 67.43 MiB/s [2024-10-08T16:16:57.725Z] 8706.00 IOPS, 68.02 MiB/s [2024-10-08T16:16:58.699Z] 8724.00 IOPS, 68.16 MiB/s [2024-10-08T16:16:59.635Z] 8722.50 IOPS, 68.14 MiB/s [2024-10-08T16:17:00.572Z] 8737.60 IOPS, 68.26 MiB/s [2024-10-08T16:17:01.508Z] 8741.83 IOPS, 68.30 MiB/s [2024-10-08T16:17:02.445Z] 8754.14 IOPS, 68.39 MiB/s [2024-10-08T16:17:03.821Z] 8760.25 IOPS, 68.44 MiB/s [2024-10-08T16:17:04.758Z] 8765.00 IOPS, 68.48 MiB/s [2024-10-08T16:17:04.758Z] 8768.60 IOPS, 68.50 MiB/s 00:09:11.435 Latency(us) 00:09:11.435 [2024-10-08T16:17:04.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.435 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:11.435 Verification LBA range: start 0x0 length 0x1000 00:09:11.435 Nvme1n1 : 10.01 8771.12 68.52 0.00 0.00 14552.27 2418.59 21720.50 00:09:11.435 [2024-10-08T16:17:04.758Z] =================================================================================================================== 00:09:11.435 [2024-10-08T16:17:04.758Z] Total : 8771.12 68.52 0.00 0.00 14552.27 2418.59 21720.50 00:09:11.435 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=297433 00:09:11.435 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:11.435 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.435 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:11.435 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:11.435 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:11.435 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:11.435 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:11.435 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:11.435 { 00:09:11.435 "params": { 00:09:11.435 "name": 
"Nvme$subsystem", 00:09:11.435 "trtype": "$TEST_TRANSPORT", 00:09:11.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.435 "adrfam": "ipv4", 00:09:11.435 "trsvcid": "$NVMF_PORT", 00:09:11.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.435 "hdgst": ${hdgst:-false}, 00:09:11.435 "ddgst": ${ddgst:-false} 00:09:11.435 }, 00:09:11.435 "method": "bdev_nvme_attach_controller" 00:09:11.435 } 00:09:11.435 EOF 00:09:11.435 )") 00:09:11.435 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:11.435 [2024-10-08 18:17:04.621617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.435 [2024-10-08 18:17:04.621654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.435 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:09:11.435 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:11.435 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:11.435 "params": { 00:09:11.435 "name": "Nvme1", 00:09:11.435 "trtype": "tcp", 00:09:11.435 "traddr": "10.0.0.2", 00:09:11.435 "adrfam": "ipv4", 00:09:11.435 "trsvcid": "4420", 00:09:11.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.435 "hdgst": false, 00:09:11.435 "ddgst": false 00:09:11.435 }, 00:09:11.435 "method": "bdev_nvme_attach_controller" 00:09:11.435 }' 00:09:11.435 [2024-10-08 18:17:04.633619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.435 [2024-10-08 18:17:04.633632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.435 [2024-10-08 18:17:04.645645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.435 [2024-10-08 18:17:04.645655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.435 [2024-10-08 18:17:04.657679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.435 [2024-10-08 18:17:04.657689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.435 [2024-10-08 18:17:04.658683] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:09:11.435 [2024-10-08 18:17:04.658724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid297433 ] 00:09:11.435 [2024-10-08 18:17:04.669709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.435 [2024-10-08 18:17:04.669719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.435 [2024-10-08 18:17:04.681744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.435 [2024-10-08 18:17:04.681757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.435 [2024-10-08 18:17:04.693775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.435 [2024-10-08 18:17:04.693785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.435 [2024-10-08 18:17:04.705805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.435 [2024-10-08 18:17:04.705814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.435 [2024-10-08 18:17:04.717837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.436 [2024-10-08 18:17:04.717846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.436 [2024-10-08 18:17:04.725252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.436 [2024-10-08 18:17:04.729870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.436 [2024-10-08 18:17:04.729881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.436 [2024-10-08 18:17:04.741900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.436 [2024-10-08 18:17:04.741913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.436 [2024-10-08 18:17:04.753937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.436 [2024-10-08 18:17:04.753947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.765969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.765991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.777997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.778010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.790027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.790036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.799929] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.695 [2024-10-08 18:17:04.802057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.802069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.814098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:11.695 [2024-10-08 18:17:04.814115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.826131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.826148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.838158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.838172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.850190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.850202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.862221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.862234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.874257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.874265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.886288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.886296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.898336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.898357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.910362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.910381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.922393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.922406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.934428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.934442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.946456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.946465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 [2024-10-08 18:17:04.958509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.695 [2024-10-08 18:17:04.958527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.695 Running I/O for 5 seconds... 
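From here the test switches to a shorter mixed workload and deliberately pokes the target while it runs: every repeated ERROR pair around this point (subsystem.c "Requested NSID 1 already in use" followed by nvmf_rpc.c "Unable to add namespace") is one nvmf_subsystem_add_ns RPC being rejected against the live subsystem. A sketch of how this second bdevperf is started, assuming the /dev/fd/63 in its command line is bash process substitution over gen_nvmf_target_json:

  # 5 s of 50/50 random read/write (-w randrw -M 50) at queue depth 128 with
  # 8 KiB I/O, backgrounded so the namespace RPCs can race against it
  build/examples/bdevperf --json <(gen_nvmf_target_json) \
    -t 5 -q 128 -w randrw -M 50 -o 8192 &
  perfpid=$!   # 297433 in this run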
00:09:11.695 [2024-10-08 18:17:04.973620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:11.695 [2024-10-08 18:17:04.973639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2128 / nvmf_rpc.c:1517 error pair repeats for each duplicate add-namespace attempt from 18:17:04.988 through 18:17:05.908 ...]
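The pair of messages above is SPDK's expected rejection path: spdk_nvmf_subsystem_add_ns_ext (subsystem.c:2128) refuses an NSID that the subsystem already maps, and the RPC handler (nvmf_rpc.c:1517) surfaces that as "Unable to add namespace". A minimal way to trigger the same pair against a running nvmf target, sketched with SPDK's stock scripts/rpc.py helper (the NQN, bdev names, and sizes below are illustrative, not taken from this run):

    # Hypothetical reproduction: the second add with the same NSID must fail.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # NSID 1 now in use
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # rejected: NSID 1 already in use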
[... error pair repeats for attempts from 18:17:05.917 through 18:17:05.955 ...]
00:09:12.733 16750.00 IOPS, 130.86 MiB/s [2024-10-08T16:17:06.056Z]
[... error pair repeats for attempts from 18:17:05.969 through 18:17:06.898 ...]
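The interleaved IOPS / MiB/s lines are periodic throughput samples from the I/O workload running while the RPC attempts are made. The three samples in this stretch (16750.00, 16832.50, and 16855.33 IOPS) are each consistent with a fixed 8 KiB I/O size; that size is inferred from the arithmetic, not stated anywhere in the log:

    # MiB/s = IOPS * 8 KiB / 1024; reproduces 130.86, 131.50, 131.68 for the three samples.
    awk 'BEGIN { split("16750.00 16832.50 16855.33", s); for (i = 1; i <= 3; i++) printf "%s IOPS -> %.2f MiB/s\n", s[i], s[i] * 8 / 1024 }'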
[... error pair repeats for attempts from 18:17:06.912 through 18:17:06.967 ...]
00:09:13.772 16832.50 IOPS, 131.50 MiB/s [2024-10-08T16:17:07.095Z]
[... error pair repeats for attempts from 18:17:06.982 through 18:17:07.870 ...]
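Each attempt in this log is a single JSON-RPC call over the target's domain socket. For reference, the raw request equivalent to the rpc.py sketch above would look roughly like this (the socket path is SPDK's default; the parameters remain illustrative):

    # Hypothetical raw JSON-RPC form of one add-namespace attempt.
    echo '{"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
           "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                      "namespace": {"bdev_name": "Malloc1", "nsid": 1}}}' \
      | nc -U /var/tmp/spdk.sock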
[... error pair repeats for attempts from 18:17:07.870 through 18:17:07.962 ...]
00:09:14.811 16855.33 IOPS, 131.68 MiB/s [2024-10-08T16:17:08.134Z]
[... error pair repeats for attempts from 18:17:07.976 through 18:17:08.160 ...]
00:09:15.071 [2024-10-08 18:17:08.169649] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.169667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.184349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.184367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.198373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.198397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.207386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.207404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.221837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.221855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.230769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.230788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.244950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.244968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.258184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.258203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.271957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.271975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.285603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.285621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.294661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.294680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.308669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.308689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.317631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.317649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.326960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.326978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.335700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.335718] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.345143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.345162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.360211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.360230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.370731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.370749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.379640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.379658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.071 [2024-10-08 18:17:08.389106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.071 [2024-10-08 18:17:08.389125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.403648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.403668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.417315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.417334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.426215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.426233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.434808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.434827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.444044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.444062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.452742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.452760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.467169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.467189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.475962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.475981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.485104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.485122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.494475] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.494493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.503690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.503707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.518056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.518074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.527066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.527085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.536931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.536949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.546461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.546483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.560651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.560671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.573853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.573873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.582575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.582595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.591852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.591871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.601039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.601058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.610100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.610119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.624529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.624548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.638087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.638105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.330 [2024-10-08 18:17:08.651635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.330 [2024-10-08 18:17:08.651654] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.661568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.661587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.670874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.670892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.685030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.685049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.692642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.692661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.701717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.701736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.710283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.710302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.719542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.719561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.734313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.734332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.744732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.744751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.758964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.758991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.772869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.772889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.781827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.781845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.795775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.795793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.804545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.804563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.813933] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.813952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.822742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.822759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.832120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.832138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.846586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.846605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.855388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.855422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.869870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.869889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.883656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.883675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.892470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.892488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.590 [2024-10-08 18:17:08.906642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.590 [2024-10-08 18:17:08.906661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.849 [2024-10-08 18:17:08.915473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:08.915492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:08.923998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:08.924016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:08.938419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:08.938437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:08.951922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:08.951941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:08.965887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:08.965906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 16871.50 IOPS, 131.81 MiB/s [2024-10-08T16:17:09.173Z] [2024-10-08 18:17:08.974691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:15.850 [2024-10-08 18:17:08.974714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:08.989159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:08.989177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:08.998238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:08.998256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:09.012358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:09.012381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:09.026302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:09.026320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:09.035067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:09.035085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:09.044357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:09.044379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:09.052957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:09.052975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:09.062470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:09.062488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:09.071967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:09.071986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:09.086197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:09.086215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:09.095221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:09.095239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:09.104613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:09.104631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:09.113550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:09.113567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:09.127463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:09.127482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:09.141361] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:09.141385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:09.155235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:09.155253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.850 [2024-10-08 18:17:09.164269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.850 [2024-10-08 18:17:09.164288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.109 [2024-10-08 18:17:09.178565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.109 [2024-10-08 18:17:09.178584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.109 [2024-10-08 18:17:09.192171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.109 [2024-10-08 18:17:09.192189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.109 [2024-10-08 18:17:09.201204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.109 [2024-10-08 18:17:09.201222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.109 [2024-10-08 18:17:09.209771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.109 [2024-10-08 18:17:09.209788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.109 [2024-10-08 18:17:09.219097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.109 [2024-10-08 18:17:09.219116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.228713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.228731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.243151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.243170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.256868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.256886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.265839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.265857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.275220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.275237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.284724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.284742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.299310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.299328] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.308382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.308400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.317694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.317713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.327018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.327036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.336281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.336300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.350400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.350419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.359385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.359403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.368599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.368618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.378323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.378341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.387056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.387074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.401592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.401610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.410382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.410399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.110 [2024-10-08 18:17:09.419630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.110 [2024-10-08 18:17:09.419647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.369 [2024-10-08 18:17:09.434337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.369 [2024-10-08 18:17:09.434357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.369 [2024-10-08 18:17:09.445374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.369 [2024-10-08 18:17:09.445399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.369 [2024-10-08 18:17:09.459862] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.369 [2024-10-08 18:17:09.459880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.369 [2024-10-08 18:17:09.468711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.369 [2024-10-08 18:17:09.468729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.369 [2024-10-08 18:17:09.477438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.369 [2024-10-08 18:17:09.477457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.369 [2024-10-08 18:17:09.491575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.369 [2024-10-08 18:17:09.491594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.369 [2024-10-08 18:17:09.505513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.369 [2024-10-08 18:17:09.505532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.369 [2024-10-08 18:17:09.518713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.369 [2024-10-08 18:17:09.518732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.369 [2024-10-08 18:17:09.528048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.369 [2024-10-08 18:17:09.528066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.369 [2024-10-08 18:17:09.537993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.369 [2024-10-08 18:17:09.538011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.370 [2024-10-08 18:17:09.546707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.370 [2024-10-08 18:17:09.546725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.370 [2024-10-08 18:17:09.556026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.370 [2024-10-08 18:17:09.556044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.370 [2024-10-08 18:17:09.570200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.370 [2024-10-08 18:17:09.570219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.370 [2024-10-08 18:17:09.578859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.370 [2024-10-08 18:17:09.578877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.370 [2024-10-08 18:17:09.588067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.370 [2024-10-08 18:17:09.588086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.370 [2024-10-08 18:17:09.594954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.370 [2024-10-08 18:17:09.594972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.370 [2024-10-08 18:17:09.605874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.370 [2024-10-08 18:17:09.605892] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.370 [2024-10-08 18:17:09.620141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.370 [2024-10-08 18:17:09.620161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.370 [2024-10-08 18:17:09.629058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.370 [2024-10-08 18:17:09.629076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.370 [2024-10-08 18:17:09.638605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.370 [2024-10-08 18:17:09.638624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.370 [2024-10-08 18:17:09.648021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.370 [2024-10-08 18:17:09.648039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.370 [2024-10-08 18:17:09.657263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.370 [2024-10-08 18:17:09.657281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.370 [2024-10-08 18:17:09.671972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.370 [2024-10-08 18:17:09.671991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.370 [2024-10-08 18:17:09.680824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.370 [2024-10-08 18:17:09.680842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.370 [2024-10-08 18:17:09.690192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.370 [2024-10-08 18:17:09.690212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.699540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.699559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.708141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.708158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.722363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.722388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.736570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.736588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.747568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.747587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.761721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.761739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.770491] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.770510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.784952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.784970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.793837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.793859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.803000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.803018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.811540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.811558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.820512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.820531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.835072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.835090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.849136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.849154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.860144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.860162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.869454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.869472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.878746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.878765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.892996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.893014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.901710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.901728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.910880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.910898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.920237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.920255] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.929453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.929471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.629 [2024-10-08 18:17:09.943966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.629 [2024-10-08 18:17:09.943986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.889 [2024-10-08 18:17:09.958182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.889 [2024-10-08 18:17:09.958202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.889 [2024-10-08 18:17:09.968971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.889 [2024-10-08 18:17:09.968990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.889 16882.40 IOPS, 131.89 MiB/s [2024-10-08T16:17:10.212Z] [2024-10-08 18:17:09.977709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.889 [2024-10-08 18:17:09.977728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.889 00:09:16.889 Latency(us) 00:09:16.889 [2024-10-08T16:17:10.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.889 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:16.890 Nvme1n1 : 5.01 16884.61 131.91 0.00 0.00 7574.35 3339.22 15603.81 00:09:16.890 [2024-10-08T16:17:10.213Z] =================================================================================================================== 00:09:16.890 [2024-10-08T16:17:10.213Z] Total : 16884.61 131.91 0.00 0.00 7574.35 3339.22 15603.81 00:09:16.890 [2024-10-08 18:17:09.988168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 18:17:09.988186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 [2024-10-08 18:17:10.000195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 18:17:10.000211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 [2024-10-08 18:17:10.012239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 18:17:10.012255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 [2024-10-08 18:17:10.024263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 18:17:10.024281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 [2024-10-08 18:17:10.032278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 18:17:10.032291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 [2024-10-08 18:17:10.044315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 18:17:10.044329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 [2024-10-08 18:17:10.064364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 
18:17:10.064387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 [2024-10-08 18:17:10.072385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 18:17:10.072400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 [2024-10-08 18:17:10.080405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 18:17:10.080420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 [2024-10-08 18:17:10.092436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 18:17:10.092451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 [2024-10-08 18:17:10.104468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 18:17:10.104478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 [2024-10-08 18:17:10.116505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 18:17:10.116518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 [2024-10-08 18:17:10.128531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 18:17:10.128543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 [2024-10-08 18:17:10.140562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 18:17:10.140573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 [2024-10-08 18:17:10.152606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 18:17:10.152626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 [2024-10-08 18:17:10.164625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.890 [2024-10-08 18:17:10.164635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (297433) - No such process 00:09:16.890 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 297433 00:09:16.890 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.890 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.890 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.890 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.890 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:16.890 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.890 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.890 delay0 00:09:16.890 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.890 18:17:10 
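The "Requested NSID 1 already in use" / "Unable to add namespace" pair condensed above is the target's normal response when an add-namespace RPC names an NSID the subsystem already owns; the test then removes NSID 1 and re-adds it backed by the delay bdev created just above. For reference, a minimal way to provoke the same error against any running SPDK target might look like this (a sketch using the stock scripts/rpc.py; the NQN and serial match this run, while the malloc bdev and its sizes are illustrative):

    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py bdev_malloc_create -b malloc0 64 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add of NSID 1 succeeds
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # repeat fails: NSID 1 already in use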
00:09:16.890 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:16.890 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.890 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:16.890 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.890 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:17.149 [2024-10-08 18:17:10.305012] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:25.270 Initializing NVMe Controllers
00:09:25.270 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:25.270 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:25.270 Initialization complete. Launching workers.
00:09:25.270 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5714
00:09:25.270 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5998, failed to submit 36
00:09:25.270 success 5805, unsuccessful 193, failed 0
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:25.270 rmmod nvme_tcp
00:09:25.270 rmmod nvme_fabrics
00:09:25.270 rmmod nvme_keyring
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 295450 ']'
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 295450
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 295450 ']'
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 295450
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 295450
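killprocess, whose liveness checks appear above and whose kill/wait lands just below, only signals a process it has confirmed is alive and is not the sudo wrapper itself. A standalone sketch of that pattern (simplified; the real autotest_common.sh also handles the sudo-parent case rather than refusing):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                   # nothing recorded to kill
        kill -0 "$pid" 2>/dev/null || return 0      # already exited
        if [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
            return 1                                # don't signal the sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                         # reap it; ignore its exit status
    }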
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 295450'
00:09:25.270 killing process with pid 295450
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 295450
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 295450
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:25.270 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:26.649
00:09:26.649 real 0m32.837s
00:09:26.649 user 0m43.826s
00:09:26.649 sys 0m11.625s
00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:26.649 ************************************
00:09:26.649 END TEST nvmf_zcopy
00:09:26.649 ************************************
00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:26.649 ************************************
00:09:26.649 START TEST nvmf_nmic
00:09:26.649 ************************************
00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:26.649 * Looking for test storage...
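The END TEST / START TEST banners just above come from the run_test wrapper, which times a named test function and brackets it with those markers; the real/user/sys lines are the output of its timing. Roughly (a sketch of the pattern, not the exact autotest_common.sh source):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                 # produces the real/user/sys summary seen above
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }

    run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp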
00:09:26.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.649 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:26.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.649 --rc genhtml_branch_coverage=1 00:09:26.649 --rc genhtml_function_coverage=1 00:09:26.649 --rc genhtml_legend=1 00:09:26.649 --rc geninfo_all_blocks=1 00:09:26.649 --rc geninfo_unexecuted_blocks=1 00:09:26.649 00:09:26.649 ' 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:26.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.650 --rc genhtml_branch_coverage=1 00:09:26.650 --rc genhtml_function_coverage=1 00:09:26.650 --rc genhtml_legend=1 00:09:26.650 --rc geninfo_all_blocks=1 00:09:26.650 --rc geninfo_unexecuted_blocks=1 00:09:26.650 00:09:26.650 ' 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:26.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.650 --rc genhtml_branch_coverage=1 00:09:26.650 --rc genhtml_function_coverage=1 00:09:26.650 --rc genhtml_legend=1 00:09:26.650 --rc geninfo_all_blocks=1 00:09:26.650 --rc geninfo_unexecuted_blocks=1 00:09:26.650 00:09:26.650 ' 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:26.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.650 --rc genhtml_branch_coverage=1 00:09:26.650 --rc genhtml_function_coverage=1 00:09:26.650 --rc genhtml_legend=1 00:09:26.650 --rc geninfo_all_blocks=1 00:09:26.650 --rc geninfo_unexecuted_blocks=1 00:09:26.650 00:09:26.650 ' 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
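[Editor's note] The xtrace above shows scripts/common.sh splitting the installed lcov version on '.'/'-' and comparing it field by field against 2 before exporting the legacy --rc lcov_*_coverage=1 flags. A minimal standalone sketch of that split-and-compare idea, assuming bash; ver_lt is our illustrative name, not the repo's actual helper:

ver_lt() {   # dotted-version less-than, e.g. ver_lt 1.15 2 (editorial sketch)
    local IFS=.- i a=() b=()
    read -ra a <<< "$1"; read -ra b <<< "$2"            # split both versions on . and -
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0       # first differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                            # equal versions are not less-than
}
ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'pre-2.0 lcov detected'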
00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:26.650 
18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:26.650 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.221 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:33.222 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:33.222 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:33.222 18:17:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:33.222 Found net devices under 0000:86:00.0: cvl_0_0 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:33.222 Found net devices under 0000:86:00.1: cvl_0_1 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:09:33.222 00:09:33.222 --- 10.0.0.2 ping statistics --- 00:09:33.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.222 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:09:33.222 00:09:33.222 --- 10.0.0.1 ping statistics --- 00:09:33.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.222 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=303136 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 303136 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 303136 ']' 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.222 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.222 [2024-10-08 18:17:25.979662] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
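[Editor's note] nvmfappstart above launches nvmf_tgt through NVMF_TARGET_NS_CMD, i.e. ip netns exec cvl_0_0_ns_spdk, so the target binds 10.0.0.2 on the port that was moved into the namespace while the initiator keeps 10.0.0.1 on the host side. A standalone sketch of that isolation pattern, substituting a veth pair for the harness's physical cvl_* ports; ns0, veth_*, and ./my_server are placeholders:

sudo ip netns add ns0                                   # private namespace for the server
sudo ip link add veth_host type veth peer name veth_ns  # veth pair stands in for a real NIC
sudo ip link set veth_ns netns ns0                      # move one end into the namespace
sudo ip addr add 10.0.0.1/24 dev veth_host
sudo ip link set veth_host up
sudo ip netns exec ns0 ip addr add 10.0.0.2/24 dev veth_ns
sudo ip netns exec ns0 ip link set veth_ns up
sudo ip netns exec ns0 ip link set lo up
ping -c 1 10.0.0.2                                      # same reachability check as the log's
sudo ip netns exec ns0 ./my_server                      # placeholder for the namespaced target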
00:09:33.222 [2024-10-08 18:17:25.979708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.222 [2024-10-08 18:17:26.051267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.222 [2024-10-08 18:17:26.127887] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.222 [2024-10-08 18:17:26.127940] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.222 [2024-10-08 18:17:26.127950] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.223 [2024-10-08 18:17:26.127956] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.223 [2024-10-08 18:17:26.127962] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.223 [2024-10-08 18:17:26.129588] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.223 [2024-10-08 18:17:26.129696] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.223 [2024-10-08 18:17:26.129801] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.223 [2024-10-08 18:17:26.129802] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.792 [2024-10-08 18:17:26.856382] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.792 Malloc0 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.792 [2024-10-08 18:17:26.908117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:33.792 test case1: single bdev can't be used in multiple subsystems 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.792 [2024-10-08 18:17:26.932000] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:33.792 [2024-10-08 18:17:26.932021] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:33.792 [2024-10-08 18:17:26.932029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 request: 00:09:33.792 { 00:09:33.792 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:33.792 "namespace": { 00:09:33.792 "bdev_name": "Malloc0", 00:09:33.792 "no_auto_visible": false 
00:09:33.792 }, 00:09:33.792 "method": "nvmf_subsystem_add_ns", 00:09:33.792 "req_id": 1 00:09:33.792 } 00:09:33.792 Got JSON-RPC error response 00:09:33.792 response: 00:09:33.792 { 00:09:33.792 "code": -32602, 00:09:33.792 "message": "Invalid parameters" 00:09:33.792 } 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:33.792 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:33.793 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:33.793 Adding namespace failed - expected result. 00:09:33.793 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:33.793 test case2: host connect to nvmf target in multiple paths 00:09:33.793 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:33.793 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.793 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.793 [2024-10-08 18:17:26.944152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:33.793 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.793 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:35.171 18:17:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:36.113 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:36.113 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:36.113 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:36.113 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:36.113 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:38.018 18:17:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:38.018 18:17:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:38.018 18:17:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:38.018 18:17:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:38.018 18:17:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:38.018 18:17:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:38.018 18:17:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:38.018 [global] 00:09:38.018 thread=1 00:09:38.018 invalidate=1 00:09:38.018 rw=write 00:09:38.018 time_based=1 00:09:38.018 runtime=1 00:09:38.018 ioengine=libaio 00:09:38.018 direct=1 00:09:38.018 bs=4096 00:09:38.018 iodepth=1 00:09:38.018 norandommap=0 00:09:38.018 numjobs=1 00:09:38.018 00:09:38.018 verify_dump=1 00:09:38.018 verify_backlog=512 00:09:38.018 verify_state_save=0 00:09:38.018 do_verify=1 00:09:38.018 verify=crc32c-intel 00:09:38.018 [job0] 00:09:38.018 filename=/dev/nvme0n1 00:09:38.276 Could not set queue depth (nvme0n1) 00:09:38.533 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.533 fio-3.35 00:09:38.533 Starting 1 thread 00:09:39.468 00:09:39.468 job0: (groupid=0, jobs=1): err= 0: pid=304228: Tue Oct 8 18:17:32 2024 00:09:39.468 read: IOPS=2027, BW=8110KiB/s (8305kB/s)(8240KiB/1016msec) 00:09:39.468 slat (nsec): min=5913, max=29339, avg=7238.52, stdev=1387.98 00:09:39.468 clat (usec): min=158, max=41376, avg=293.26, stdev=1800.78 00:09:39.468 lat (usec): min=165, max=41384, avg=300.50, stdev=1800.82 00:09:39.468 clat percentiles (usec): 00:09:39.468 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:09:39.468 | 30.00th=[ 184], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 223], 00:09:39.468 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 265], 95.00th=[ 269], 00:09:39.468 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[41157], 99.95th=[41157], 00:09:39.468 | 99.99th=[41157] 00:09:39.468 write: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(10.0MiB/1016msec); 0 zone resets 00:09:39.468 slat (nsec): min=8852, max=44870, avg=10226.73, stdev=1445.37 00:09:39.468 clat (usec): min=108, max=400, avg=140.98, stdev=27.24 00:09:39.468 lat (usec): min=121, max=445, avg=151.20, stdev=27.59 00:09:39.468 clat percentiles (usec): 00:09:39.468 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 125], 00:09:39.468 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 130], 60.00th=[ 133], 00:09:39.468 | 70.00th=[ 137], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 186], 00:09:39.468 | 99.00th=[ 249], 99.50th=[ 258], 99.90th=[ 273], 99.95th=[ 338], 00:09:39.468 | 99.99th=[ 400] 00:09:39.468 bw ( KiB/s): min= 8192, max=12263, per=100.00%, avg=10227.50, stdev=2878.63, samples=2 00:09:39.468 iops : min= 2048, max= 3065, avg=2556.50, stdev=719.13, samples=2 00:09:39.468 lat (usec) : 250=91.65%, 500=8.27% 00:09:39.468 lat (msec) : 50=0.09% 00:09:39.468 cpu : usr=2.27%, sys=4.04%, ctx=4620, majf=0, minf=1 00:09:39.468 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.468 issued rwts: total=2060,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.468 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.468 00:09:39.468 Run status group 0 (all jobs): 00:09:39.468 READ: bw=8110KiB/s (8305kB/s), 8110KiB/s-8110KiB/s (8305kB/s-8305kB/s), io=8240KiB (8438kB), run=1016-1016msec 00:09:39.468 WRITE: bw=9.84MiB/s (10.3MB/s), 9.84MiB/s-9.84MiB/s (10.3MB/s-10.3MB/s), io=10.0MiB (10.5MB), run=1016-1016msec 00:09:39.468 00:09:39.468 Disk stats (read/write): 00:09:39.468 nvme0n1: ios=2101/2560, merge=0/0, ticks=700/349, in_queue=1049, util=95.59% 00:09:39.468 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:39.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:39.728 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:39.728 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:39.728 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:39.728 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.728 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:39.728 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.728 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:39.728 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:39.728 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:39.728 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:39.728 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:39.728 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:39.728 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:39.728 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:39.728 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:39.728 rmmod nvme_tcp 00:09:39.728 rmmod nvme_fabrics 00:09:39.728 rmmod nvme_keyring 00:09:39.728 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:39.728 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:39.728 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:39.728 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 303136 ']' 00:09:39.728 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 303136 00:09:39.728 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 303136 ']' 00:09:39.728 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 303136 00:09:39.728 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:39.728 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.728 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 303136 00:09:39.987 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:39.987 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:39.987 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 303136' 00:09:39.987 killing process with pid 303136 00:09:39.987 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 303136 00:09:39.987 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@974 -- # wait 303136 00:09:39.987 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:39.987 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:39.987 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:39.987 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:39.987 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:39.987 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:39.987 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:39.988 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:39.988 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:39.988 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.988 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.988 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:42.525 00:09:42.525 real 0m15.698s 00:09:42.525 user 0m35.935s 00:09:42.525 sys 0m5.410s 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.525 ************************************ 00:09:42.525 END TEST nvmf_nmic 00:09:42.525 ************************************ 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:42.525 ************************************ 00:09:42.525 START TEST nvmf_fio_target 00:09:42.525 ************************************ 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:42.525 * Looking for test storage... 
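[Editor's note] The nvmf_nmic teardown a few lines up removes only its own firewall holes: each rule was tagged with an SPDK_NVMF comment when ipts inserted it, and nvmf_tcp_fini later replays the ruleset through grep -v to sweep those tags out. A two-line sketch of that tag-and-sweep pattern, with MYTAG and port 4420 as illustrative values:

iptables -I INPUT 1 -p tcp --dport 4420 -j ACCEPT -m comment --comment MYTAG   # tag at insert time
iptables-save | grep -v MYTAG | iptables-restore                               # sweep by tag later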
00:09:42.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:42.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.525 --rc genhtml_branch_coverage=1 00:09:42.525 --rc genhtml_function_coverage=1 00:09:42.525 --rc genhtml_legend=1 00:09:42.525 --rc geninfo_all_blocks=1 00:09:42.525 --rc geninfo_unexecuted_blocks=1 00:09:42.525 00:09:42.525 ' 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:42.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.525 --rc genhtml_branch_coverage=1 00:09:42.525 --rc genhtml_function_coverage=1 00:09:42.525 --rc genhtml_legend=1 00:09:42.525 --rc geninfo_all_blocks=1 00:09:42.525 --rc geninfo_unexecuted_blocks=1 00:09:42.525 00:09:42.525 ' 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:42.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.525 --rc genhtml_branch_coverage=1 00:09:42.525 --rc genhtml_function_coverage=1 00:09:42.525 --rc genhtml_legend=1 00:09:42.525 --rc geninfo_all_blocks=1 00:09:42.525 --rc geninfo_unexecuted_blocks=1 00:09:42.525 00:09:42.525 ' 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:42.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.525 --rc genhtml_branch_coverage=1 00:09:42.525 --rc genhtml_function_coverage=1 00:09:42.525 --rc genhtml_legend=1 00:09:42.525 --rc geninfo_all_blocks=1 00:09:42.525 --rc geninfo_unexecuted_blocks=1 00:09:42.525 00:09:42.525 ' 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.525 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:42.526 18:17:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:42.526 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.098 18:17:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:49.098 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:49.098 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.098 18:17:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:49.098 Found net devices under 0000:86:00.0: cvl_0_0 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:49.098 Found net devices under 0000:86:00.1: cvl_0_1 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:49.098 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.098 18:17:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:49.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:09:49.099 00:09:49.099 --- 10.0.0.2 ping statistics --- 00:09:49.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.099 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:49.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:09:49.099 00:09:49.099 --- 10.0.0.1 ping statistics --- 00:09:49.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.099 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=308006 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 308006 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 308006 ']' 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.099 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.099 [2024-10-08 18:17:41.715127] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
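
What the trace above captures is nvmf_tcp_init carving a point-to-point NVMe/TCP test topology out of the two ice ports: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule is inserted at position 1 to admit traffic to port 4420, and both directions are verified with ping. A condensed sketch of that sequence, using the interface and namespace names from this run (the shell variables are illustrative, not part of the actual script):

    #!/usr/bin/env bash
    # Target port is hidden inside its own namespace; initiator port stays in the root ns.
    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0; TGT_IP=10.0.0.2
    INI_IF=cvl_0_1; INI_IP=10.0.0.1

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"            # target port now invisible to the root ns
    ip addr add "$INI_IP/24" dev "$INI_IF"
    ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Admit NVMe/TCP from the initiator interface (default NVMe-oF port 4420).
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions before starting the target.
    ping -c 1 "$TGT_IP"
    ip netns exec "$NS" ping -c 1 "$INI_IP"

Because NVMF_TARGET_NS_CMD prefixes every target invocation with "ip netns exec cvl_0_0_ns_spdk", the nvmf_tgt instance whose startup banner appears here listens inside the namespace and is reachable at 10.0.0.2:4420 only through cvl_0_1.
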
00:09:49.099 [2024-10-08 18:17:41.715170] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.099 [2024-10-08 18:17:41.787561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.099 [2024-10-08 18:17:41.863431] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.099 [2024-10-08 18:17:41.863467] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.099 [2024-10-08 18:17:41.863474] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.099 [2024-10-08 18:17:41.863481] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.099 [2024-10-08 18:17:41.863505] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.099 [2024-10-08 18:17:41.865100] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.099 [2024-10-08 18:17:41.865219] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.099 [2024-10-08 18:17:41.865327] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.099 [2024-10-08 18:17:41.865329] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.358 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.358 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:49.358 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:49.358 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:49.358 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.358 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.358 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:49.617 [2024-10-08 18:17:42.769819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.617 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.876 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:49.876 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.134 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:50.134 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.134 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:50.134 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.393 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:50.393 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:50.652 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.911 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:50.911 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.170 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:51.170 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.170 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:51.170 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:51.428 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:51.686 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:51.686 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:51.945 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:51.945 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:52.204 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.204 [2024-10-08 18:17:45.467726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.204 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:52.462 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:52.721 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:54.098 18:17:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:54.098 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:54.098 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:54.098 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:54.098 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:54.098 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:56.003 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:56.003 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:56.003 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:56.003 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:56.003 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:56.003 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:56.003 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:56.003 [global] 00:09:56.003 thread=1 00:09:56.003 invalidate=1 00:09:56.003 rw=write 00:09:56.003 time_based=1 00:09:56.003 runtime=1 00:09:56.003 ioengine=libaio 00:09:56.003 direct=1 00:09:56.003 bs=4096 00:09:56.003 iodepth=1 00:09:56.003 norandommap=0 00:09:56.003 numjobs=1 00:09:56.003 00:09:56.003 verify_dump=1 00:09:56.003 verify_backlog=512 00:09:56.003 verify_state_save=0 00:09:56.003 do_verify=1 00:09:56.003 verify=crc32c-intel 00:09:56.003 [job0] 00:09:56.003 filename=/dev/nvme0n1 00:09:56.003 [job1] 00:09:56.003 filename=/dev/nvme0n2 00:09:56.003 [job2] 00:09:56.003 filename=/dev/nvme0n3 00:09:56.003 [job3] 00:09:56.003 filename=/dev/nvme0n4 00:09:56.003 Could not set queue depth (nvme0n1) 00:09:56.003 Could not set queue depth (nvme0n2) 00:09:56.003 Could not set queue depth (nvme0n3) 00:09:56.003 Could not set queue depth (nvme0n4) 00:09:56.262 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.262 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.262 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.262 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.262 fio-3.35 00:09:56.262 Starting 4 threads 00:09:57.640 00:09:57.640 job0: (groupid=0, jobs=1): err= 0: pid=309486: Tue Oct 8 18:17:50 2024 00:09:57.640 read: IOPS=309, BW=1238KiB/s (1268kB/s)(1244KiB/1005msec) 00:09:57.640 slat (nsec): min=7108, max=23219, avg=8812.94, stdev=2133.64 00:09:57.640 clat (usec): min=209, max=41269, avg=2797.18, stdev=9755.84 00:09:57.640 lat (usec): min=218, max=41278, avg=2805.99, stdev=9756.36 00:09:57.640 clat percentiles (usec): 00:09:57.640 | 1.00th=[ 255], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 
00:09:57.640 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 322], 00:09:57.640 | 70.00th=[ 343], 80.00th=[ 367], 90.00th=[ 412], 95.00th=[41157], 00:09:57.640 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:57.640 | 99.99th=[41157] 00:09:57.640 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:09:57.640 slat (nsec): min=10187, max=37686, avg=11551.28, stdev=1983.45 00:09:57.640 clat (usec): min=210, max=362, avg=240.63, stdev= 7.11 00:09:57.640 lat (usec): min=228, max=374, avg=252.18, stdev= 7.01 00:09:57.640 clat percentiles (usec): 00:09:57.640 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 239], 00:09:57.640 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 241], 60.00th=[ 241], 00:09:57.640 | 70.00th=[ 243], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 247], 00:09:57.640 | 99.00th=[ 253], 99.50th=[ 260], 99.90th=[ 363], 99.95th=[ 363], 00:09:57.640 | 99.99th=[ 363] 00:09:57.640 bw ( KiB/s): min= 4096, max= 4096, per=16.20%, avg=4096.00, stdev= 0.00, samples=1 00:09:57.640 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:57.640 lat (usec) : 250=60.87%, 500=36.82% 00:09:57.640 lat (msec) : 50=2.31% 00:09:57.640 cpu : usr=0.80%, sys=1.20%, ctx=824, majf=0, minf=1 00:09:57.640 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.640 issued rwts: total=311,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.640 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.640 job1: (groupid=0, jobs=1): err= 0: pid=309502: Tue Oct 8 18:17:50 2024 00:09:57.640 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:57.640 slat (nsec): min=4359, max=25259, avg=7898.91, stdev=1202.06 00:09:57.640 clat (usec): min=167, max=41131, avg=257.75, stdev=1084.51 00:09:57.640 lat (usec): min=176, max=41135, avg=265.65, stdev=1084.51 00:09:57.640 clat percentiles (usec): 00:09:57.640 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:09:57.640 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:09:57.640 | 70.00th=[ 229], 80.00th=[ 243], 90.00th=[ 265], 95.00th=[ 277], 00:09:57.640 | 99.00th=[ 310], 99.50th=[ 355], 99.90th=[ 1139], 99.95th=[27395], 00:09:57.640 | 99.99th=[41157] 00:09:57.640 write: IOPS=2368, BW=9475KiB/s (9702kB/s)(9484KiB/1001msec); 0 zone resets 00:09:57.640 slat (nsec): min=3272, max=32496, avg=10430.72, stdev=2341.71 00:09:57.640 clat (usec): min=119, max=457, avg=176.61, stdev=38.65 00:09:57.640 lat (usec): min=123, max=461, avg=187.04, stdev=38.77 00:09:57.640 clat percentiles (usec): 00:09:57.640 | 1.00th=[ 128], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 147], 00:09:57.640 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 172], 00:09:57.641 | 70.00th=[ 184], 80.00th=[ 208], 90.00th=[ 245], 95.00th=[ 255], 00:09:57.641 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 347], 99.95th=[ 383], 00:09:57.641 | 99.99th=[ 457] 00:09:57.641 bw ( KiB/s): min= 8192, max= 8192, per=32.40%, avg=8192.00, stdev= 0.00, samples=1 00:09:57.641 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:57.641 lat (usec) : 250=88.23%, 500=11.65% 00:09:57.641 lat (msec) : 2=0.07%, 50=0.05% 00:09:57.641 cpu : usr=4.20%, sys=5.90%, ctx=4419, majf=0, minf=1 00:09:57.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.641 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.641 issued rwts: total=2048,2371,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.641 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.641 job2: (groupid=0, jobs=1): err= 0: pid=309523: Tue Oct 8 18:17:50 2024 00:09:57.641 read: IOPS=513, BW=2053KiB/s (2102kB/s)(2100KiB/1023msec) 00:09:57.641 slat (nsec): min=7152, max=23080, avg=8564.12, stdev=1482.56 00:09:57.641 clat (usec): min=190, max=41421, avg=1448.79, stdev=6793.42 00:09:57.641 lat (usec): min=198, max=41429, avg=1457.36, stdev=6794.27 00:09:57.641 clat percentiles (usec): 00:09:57.641 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 227], 00:09:57.641 | 30.00th=[ 260], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:09:57.641 | 70.00th=[ 302], 80.00th=[ 326], 90.00th=[ 371], 95.00th=[ 461], 00:09:57.641 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:09:57.641 | 99.99th=[41681] 00:09:57.641 write: IOPS=1000, BW=4004KiB/s (4100kB/s)(4096KiB/1023msec); 0 zone resets 00:09:57.641 slat (nsec): min=10735, max=41691, avg=12946.51, stdev=2427.43 00:09:57.641 clat (usec): min=123, max=371, avg=233.71, stdev=38.29 00:09:57.641 lat (usec): min=134, max=382, avg=246.66, stdev=38.41 00:09:57.641 clat percentiles (usec): 00:09:57.641 | 1.00th=[ 139], 5.00th=[ 155], 10.00th=[ 178], 20.00th=[ 231], 00:09:57.641 | 30.00th=[ 235], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 241], 00:09:57.641 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 258], 00:09:57.641 | 99.00th=[ 367], 99.50th=[ 371], 99.90th=[ 371], 99.95th=[ 371], 00:09:57.641 | 99.99th=[ 371] 00:09:57.641 bw ( KiB/s): min= 8192, max= 8192, per=32.40%, avg=8192.00, stdev= 0.00, samples=1 00:09:57.641 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:57.641 lat (usec) : 250=69.72%, 500=29.05%, 750=0.26% 00:09:57.641 lat (msec) : 50=0.97% 00:09:57.641 cpu : usr=1.08%, sys=2.74%, ctx=1551, majf=0, minf=1 00:09:57.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.641 issued rwts: total=525,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.641 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.641 job3: (groupid=0, jobs=1): err= 0: pid=309530: Tue Oct 8 18:17:50 2024 00:09:57.641 read: IOPS=2387, BW=9550KiB/s (9780kB/s)(9560KiB/1001msec) 00:09:57.641 slat (nsec): min=6742, max=30679, avg=7708.41, stdev=1139.39 00:09:57.641 clat (usec): min=166, max=5113, avg=225.58, stdev=102.08 00:09:57.641 lat (usec): min=174, max=5121, avg=233.28, stdev=102.12 00:09:57.641 clat percentiles (usec): 00:09:57.641 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 208], 00:09:57.641 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 227], 00:09:57.641 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 265], 00:09:57.641 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 322], 99.95th=[ 330], 00:09:57.641 | 99.99th=[ 5145] 00:09:57.641 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:57.641 slat (nsec): min=9970, max=37747, avg=11267.43, stdev=1290.92 00:09:57.641 clat (usec): min=120, max=298, avg=156.04, stdev=19.05 00:09:57.641 lat (usec): min=131, max=309, avg=167.30, stdev=19.24 00:09:57.641 
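
A short key for the fio summaries in the rest of this log: slat is submission latency (time to hand the I/O to the kernel), clat is completion latency (submission to completion), lat is the sum of the two, the percentile tables describe the clat distribution, and "issued rwts" counts the read/write/trim requests actually issued. The headline numbers are internally consistent; taking job0 of this run as a worked example:

    issued reads:  311 x 4096 B       = 1244 KiB    (matches "1244KiB/1005msec")
    IOPS:          311 / 1.005 s      ~ 309         (matches "read: IOPS=309")
    bandwidth:     1244 KiB / 1.005 s ~ 1238 KiB/s  (matches "BW=1238KiB/s")
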
clat percentiles (usec): 00:09:57.641 | 1.00th=[ 128], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 145], 00:09:57.641 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:09:57.641 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 172], 95.00th=[ 184], 00:09:57.641 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 260], 99.95th=[ 281], 00:09:57.641 | 99.99th=[ 297] 00:09:57.641 bw ( KiB/s): min=11416, max=11416, per=45.15%, avg=11416.00, stdev= 0.00, samples=1 00:09:57.641 iops : min= 2854, max= 2854, avg=2854.00, stdev= 0.00, samples=1 00:09:57.641 lat (usec) : 250=95.01%, 500=4.97% 00:09:57.641 lat (msec) : 10=0.02% 00:09:57.641 cpu : usr=2.30%, sys=5.00%, ctx=4952, majf=0, minf=1 00:09:57.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.641 issued rwts: total=2390,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.641 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.641 00:09:57.641 Run status group 0 (all jobs): 00:09:57.641 READ: bw=20.1MiB/s (21.1MB/s), 1238KiB/s-9550KiB/s (1268kB/s-9780kB/s), io=20.6MiB (21.6MB), run=1001-1023msec 00:09:57.641 WRITE: bw=24.7MiB/s (25.9MB/s), 2038KiB/s-9.99MiB/s (2087kB/s-10.5MB/s), io=25.3MiB (26.5MB), run=1001-1023msec 00:09:57.641 00:09:57.641 Disk stats (read/write): 00:09:57.641 nvme0n1: ios=357/512, merge=0/0, ticks=723/116, in_queue=839, util=86.47% 00:09:57.641 nvme0n2: ios=1767/2048, merge=0/0, ticks=493/332, in_queue=825, util=90.34% 00:09:57.641 nvme0n3: ios=577/1024, merge=0/0, ticks=948/226, in_queue=1174, util=93.51% 00:09:57.641 nvme0n4: ios=2071/2096, merge=0/0, ticks=1361/318, in_queue=1679, util=93.88% 00:09:57.641 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:57.641 [global] 00:09:57.641 thread=1 00:09:57.641 invalidate=1 00:09:57.641 rw=randwrite 00:09:57.641 time_based=1 00:09:57.641 runtime=1 00:09:57.641 ioengine=libaio 00:09:57.641 direct=1 00:09:57.641 bs=4096 00:09:57.641 iodepth=1 00:09:57.641 norandommap=0 00:09:57.641 numjobs=1 00:09:57.641 00:09:57.641 verify_dump=1 00:09:57.641 verify_backlog=512 00:09:57.641 verify_state_save=0 00:09:57.641 do_verify=1 00:09:57.641 verify=crc32c-intel 00:09:57.641 [job0] 00:09:57.641 filename=/dev/nvme0n1 00:09:57.641 [job1] 00:09:57.641 filename=/dev/nvme0n2 00:09:57.641 [job2] 00:09:57.641 filename=/dev/nvme0n3 00:09:57.641 [job3] 00:09:57.641 filename=/dev/nvme0n4 00:09:57.641 Could not set queue depth (nvme0n1) 00:09:57.641 Could not set queue depth (nvme0n2) 00:09:57.641 Could not set queue depth (nvme0n3) 00:09:57.641 Could not set queue depth (nvme0n4) 00:09:57.900 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.900 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.900 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.900 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.900 fio-3.35 00:09:57.900 Starting 4 threads 00:09:59.327 00:09:59.327 job0: (groupid=0, jobs=1): err= 0: pid=309958: Tue Oct 8 18:17:52 2024 00:09:59.327 read: IOPS=24, 
BW=99.9KiB/s (102kB/s)(100KiB/1001msec) 00:09:59.327 slat (nsec): min=7309, max=23888, avg=15972.84, stdev=6993.96 00:09:59.327 clat (usec): min=213, max=41986, avg=36143.78, stdev=13537.62 00:09:59.327 lat (usec): min=224, max=41996, avg=36159.76, stdev=13538.08 00:09:59.327 clat percentiles (usec): 00:09:59.327 | 1.00th=[ 215], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[40633], 00:09:59.327 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:59.327 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:59.327 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:59.327 | 99.99th=[42206] 00:09:59.327 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:59.327 slat (nsec): min=9276, max=38158, avg=11104.79, stdev=2605.29 00:09:59.327 clat (usec): min=137, max=644, avg=174.49, stdev=29.52 00:09:59.327 lat (usec): min=147, max=654, avg=185.60, stdev=30.20 00:09:59.327 clat percentiles (usec): 00:09:59.327 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:09:59.327 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:09:59.327 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 200], 00:09:59.327 | 99.00th=[ 235], 99.50th=[ 363], 99.90th=[ 644], 99.95th=[ 644], 00:09:59.327 | 99.99th=[ 644] 00:09:59.327 bw ( KiB/s): min= 4096, max= 4096, per=19.25%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.327 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.327 lat (usec) : 250=95.16%, 500=0.56%, 750=0.19% 00:09:59.327 lat (msec) : 50=4.10% 00:09:59.327 cpu : usr=0.00%, sys=0.80%, ctx=539, majf=0, minf=1 00:09:59.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.327 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.327 job1: (groupid=0, jobs=1): err= 0: pid=309959: Tue Oct 8 18:17:52 2024 00:09:59.327 read: IOPS=1264, BW=5058KiB/s (5179kB/s)(5068KiB/1002msec) 00:09:59.327 slat (nsec): min=6441, max=29785, avg=7624.31, stdev=1864.31 00:09:59.327 clat (usec): min=164, max=41340, avg=579.58, stdev=3776.33 00:09:59.327 lat (usec): min=171, max=41348, avg=587.21, stdev=3776.61 00:09:59.327 clat percentiles (usec): 00:09:59.327 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 188], 00:09:59.327 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 210], 60.00th=[ 253], 00:09:59.327 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 281], 00:09:59.327 | 99.00th=[ 441], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:09:59.327 | 99.99th=[41157] 00:09:59.327 write: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec); 0 zone resets 00:09:59.327 slat (nsec): min=9841, max=37455, avg=11111.16, stdev=1407.18 00:09:59.327 clat (usec): min=119, max=253, avg=152.31, stdev=20.16 00:09:59.327 lat (usec): min=131, max=264, avg=163.43, stdev=20.31 00:09:59.327 clat percentiles (usec): 00:09:59.327 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:09:59.327 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:09:59.327 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 180], 00:09:59.327 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 251], 99.95th=[ 253], 00:09:59.327 | 99.99th=[ 253] 00:09:59.327 bw ( KiB/s): min=12288, 
max=12288, per=57.75%, avg=12288.00, stdev= 0.00, samples=1 00:09:59.327 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:59.327 lat (usec) : 250=80.49%, 500=19.09%, 750=0.04% 00:09:59.327 lat (msec) : 50=0.39% 00:09:59.327 cpu : usr=1.40%, sys=2.60%, ctx=2807, majf=0, minf=1 00:09:59.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.327 issued rwts: total=1267,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.327 job2: (groupid=0, jobs=1): err= 0: pid=309960: Tue Oct 8 18:17:52 2024 00:09:59.327 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:59.327 slat (nsec): min=6846, max=36203, avg=7743.81, stdev=1069.01 00:09:59.327 clat (usec): min=155, max=326, avg=202.71, stdev=27.73 00:09:59.327 lat (usec): min=163, max=334, avg=210.45, stdev=27.76 00:09:59.327 clat percentiles (usec): 00:09:59.327 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 180], 00:09:59.327 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 206], 00:09:59.327 | 70.00th=[ 219], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 253], 00:09:59.327 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 322], 99.95th=[ 326], 00:09:59.327 | 99.99th=[ 326] 00:09:59.327 write: IOPS=2767, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec); 0 zone resets 00:09:59.327 slat (nsec): min=9570, max=37976, avg=10985.83, stdev=1410.47 00:09:59.327 clat (usec): min=113, max=327, avg=151.51, stdev=26.17 00:09:59.327 lat (usec): min=125, max=338, avg=162.50, stdev=26.10 00:09:59.327 clat percentiles (usec): 00:09:59.327 | 1.00th=[ 121], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 131], 00:09:59.327 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 145], 60.00th=[ 149], 00:09:59.327 | 70.00th=[ 155], 80.00th=[ 167], 90.00th=[ 196], 95.00th=[ 206], 00:09:59.327 | 99.00th=[ 233], 99.50th=[ 247], 99.90th=[ 293], 99.95th=[ 293], 00:09:59.327 | 99.99th=[ 330] 00:09:59.327 bw ( KiB/s): min=11512, max=11512, per=54.10%, avg=11512.00, stdev= 0.00, samples=1 00:09:59.327 iops : min= 2878, max= 2878, avg=2878.00, stdev= 0.00, samples=1 00:09:59.327 lat (usec) : 250=97.00%, 500=3.00% 00:09:59.327 cpu : usr=2.90%, sys=4.80%, ctx=5332, majf=0, minf=1 00:09:59.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.327 issued rwts: total=2560,2770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.327 job3: (groupid=0, jobs=1): err= 0: pid=309961: Tue Oct 8 18:17:52 2024 00:09:59.327 read: IOPS=120, BW=483KiB/s (495kB/s)(484KiB/1002msec) 00:09:59.327 slat (nsec): min=6747, max=24066, avg=10380.01, stdev=5958.33 00:09:59.327 clat (usec): min=232, max=42032, avg=7358.28, stdev=15537.33 00:09:59.327 lat (usec): min=240, max=42055, avg=7368.66, stdev=15543.01 00:09:59.327 clat percentiles (usec): 00:09:59.327 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 255], 00:09:59.327 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 273], 00:09:59.327 | 70.00th=[ 285], 80.00th=[ 347], 90.00th=[41157], 95.00th=[41157], 00:09:59.327 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 
99.95th=[42206], 00:09:59.327 | 99.99th=[42206] 00:09:59.327 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:09:59.327 slat (nsec): min=8924, max=39167, avg=10218.50, stdev=1691.42 00:09:59.327 clat (usec): min=127, max=332, avg=201.11, stdev=26.96 00:09:59.327 lat (usec): min=137, max=342, avg=211.33, stdev=27.22 00:09:59.327 clat percentiles (usec): 00:09:59.327 | 1.00th=[ 139], 5.00th=[ 161], 10.00th=[ 174], 20.00th=[ 184], 00:09:59.327 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:09:59.327 | 70.00th=[ 208], 80.00th=[ 219], 90.00th=[ 243], 95.00th=[ 245], 00:09:59.327 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 334], 99.95th=[ 334], 00:09:59.327 | 99.99th=[ 334] 00:09:59.327 bw ( KiB/s): min= 4096, max= 4096, per=19.25%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.327 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.327 lat (usec) : 250=81.52%, 500=15.17% 00:09:59.327 lat (msec) : 50=3.32% 00:09:59.327 cpu : usr=0.30%, sys=0.60%, ctx=633, majf=0, minf=2 00:09:59.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.327 issued rwts: total=121,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.327 00:09:59.327 Run status group 0 (all jobs): 00:09:59.327 READ: bw=15.5MiB/s (16.2MB/s), 99.9KiB/s-9.99MiB/s (102kB/s-10.5MB/s), io=15.5MiB (16.3MB), run=1001-1002msec 00:09:59.327 WRITE: bw=20.8MiB/s (21.8MB/s), 2044KiB/s-10.8MiB/s (2093kB/s-11.3MB/s), io=20.8MiB (21.8MB), run=1001-1002msec 00:09:59.327 00:09:59.327 Disk stats (read/write): 00:09:59.327 nvme0n1: ios=47/512, merge=0/0, ticks=1413/89, in_queue=1502, util=96.99% 00:09:59.327 nvme0n2: ios=1269/1536, merge=0/0, ticks=890/217, in_queue=1107, util=96.92% 00:09:59.327 nvme0n3: ios=2085/2130, merge=0/0, ticks=600/328, in_queue=928, util=96.86% 00:09:59.327 nvme0n4: ios=116/512, merge=0/0, ticks=684/100, in_queue=784, util=89.08% 00:09:59.327 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:59.327 [global] 00:09:59.327 thread=1 00:09:59.327 invalidate=1 00:09:59.327 rw=write 00:09:59.328 time_based=1 00:09:59.328 runtime=1 00:09:59.328 ioengine=libaio 00:09:59.328 direct=1 00:09:59.328 bs=4096 00:09:59.328 iodepth=128 00:09:59.328 norandommap=0 00:09:59.328 numjobs=1 00:09:59.328 00:09:59.328 verify_dump=1 00:09:59.328 verify_backlog=512 00:09:59.328 verify_state_save=0 00:09:59.328 do_verify=1 00:09:59.328 verify=crc32c-intel 00:09:59.328 [job0] 00:09:59.328 filename=/dev/nvme0n1 00:09:59.328 [job1] 00:09:59.328 filename=/dev/nvme0n2 00:09:59.328 [job2] 00:09:59.328 filename=/dev/nvme0n3 00:09:59.328 [job3] 00:09:59.328 filename=/dev/nvme0n4 00:09:59.328 Could not set queue depth (nvme0n1) 00:09:59.328 Could not set queue depth (nvme0n2) 00:09:59.328 Could not set queue depth (nvme0n3) 00:09:59.328 Could not set queue depth (nvme0n4) 00:09:59.641 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.641 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.641 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.641 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.641 fio-3.35 00:09:59.641 Starting 4 threads 00:10:00.578 00:10:00.578 job0: (groupid=0, jobs=1): err= 0: pid=310337: Tue Oct 8 18:17:53 2024 00:10:00.578 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:10:00.578 slat (nsec): min=1413, max=5581.5k, avg=80785.66, stdev=430563.39 00:10:00.578 clat (usec): min=7298, max=16283, avg=10625.25, stdev=1125.20 00:10:00.578 lat (usec): min=7532, max=16294, avg=10706.04, stdev=1148.53 00:10:00.578 clat percentiles (usec): 00:10:00.578 | 1.00th=[ 7963], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:10:00.578 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:10:00.578 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11863], 95.00th=[12256], 00:10:00.578 | 99.00th=[13829], 99.50th=[15926], 99.90th=[16319], 99.95th=[16319], 00:10:00.578 | 99.99th=[16319] 00:10:00.578 write: IOPS=5597, BW=21.9MiB/s (22.9MB/s)(21.9MiB/1002msec); 0 zone resets 00:10:00.578 slat (nsec): min=1976, max=35961k, avg=99498.56, stdev=894807.36 00:10:00.578 clat (usec): min=423, max=90659, avg=12930.41, stdev=9184.25 00:10:00.578 lat (usec): min=3328, max=90689, avg=13029.90, stdev=9271.51 00:10:00.578 clat percentiles (usec): 00:10:00.578 | 1.00th=[ 6915], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10159], 00:10:00.578 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10552], 00:10:00.578 | 70.00th=[10683], 80.00th=[10814], 90.00th=[13173], 95.00th=[31327], 00:10:00.578 | 99.00th=[56361], 99.50th=[56361], 99.90th=[56361], 99.95th=[65274], 00:10:00.578 | 99.99th=[90702] 00:10:00.578 bw ( KiB/s): min=20136, max=23712, per=31.30%, avg=21924.00, stdev=2528.61, samples=2 00:10:00.578 iops : min= 5034, max= 5928, avg=5481.00, stdev=632.15, samples=2 00:10:00.578 lat (usec) : 500=0.01% 00:10:00.578 lat (msec) : 4=0.39%, 10=15.98%, 20=79.17%, 50=2.68%, 100=1.77% 00:10:00.578 cpu : usr=4.60%, sys=5.69%, ctx=452, majf=0, minf=1 00:10:00.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:00.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.578 issued rwts: total=5120,5609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.578 job1: (groupid=0, jobs=1): err= 0: pid=310338: Tue Oct 8 18:17:53 2024 00:10:00.578 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:10:00.578 slat (nsec): min=1436, max=19677k, avg=128412.85, stdev=996445.30 00:10:00.578 clat (usec): min=5778, max=39633, avg=15506.40, stdev=5215.30 00:10:00.578 lat (usec): min=5788, max=39660, avg=15634.81, stdev=5295.09 00:10:00.578 clat percentiles (usec): 00:10:00.578 | 1.00th=[ 6325], 5.00th=[10814], 10.00th=[11469], 20.00th=[11731], 00:10:00.578 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12780], 60.00th=[15008], 00:10:00.578 | 70.00th=[15664], 80.00th=[20317], 90.00th=[23462], 95.00th=[27657], 00:10:00.578 | 99.00th=[30278], 99.50th=[30278], 99.90th=[31589], 99.95th=[32113], 00:10:00.578 | 99.99th=[39584] 00:10:00.578 write: IOPS=2830, BW=11.1MiB/s (11.6MB/s)(11.2MiB/1011msec); 0 zone resets 00:10:00.578 slat (usec): min=2, max=16464, avg=229.63, stdev=1250.52 00:10:00.578 clat (msec): min=3, max=129, avg=30.91, stdev=27.05 00:10:00.578 lat (msec): min=3, max=129, avg=31.14, 
stdev=27.22 00:10:00.578 clat percentiles (msec): 00:10:00.578 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 13], 00:10:00.578 | 30.00th=[ 17], 40.00th=[ 20], 50.00th=[ 22], 60.00th=[ 22], 00:10:00.578 | 70.00th=[ 28], 80.00th=[ 51], 90.00th=[ 70], 95.00th=[ 99], 00:10:00.578 | 99.00th=[ 120], 99.50th=[ 128], 99.90th=[ 130], 99.95th=[ 130], 00:10:00.578 | 99.99th=[ 130] 00:10:00.578 bw ( KiB/s): min= 9216, max=12656, per=15.61%, avg=10936.00, stdev=2432.45, samples=2 00:10:00.578 iops : min= 2304, max= 3164, avg=2734.00, stdev=608.11, samples=2 00:10:00.578 lat (msec) : 4=0.22%, 10=6.77%, 20=53.89%, 50=28.55%, 100=8.13% 00:10:00.578 lat (msec) : 250=2.43% 00:10:00.578 cpu : usr=1.88%, sys=4.16%, ctx=272, majf=0, minf=1 00:10:00.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:00.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.578 issued rwts: total=2560,2862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.578 job2: (groupid=0, jobs=1): err= 0: pid=310339: Tue Oct 8 18:17:53 2024 00:10:00.578 read: IOPS=3543, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1007msec) 00:10:00.578 slat (nsec): min=1524, max=20206k, avg=141375.00, stdev=1073842.86 00:10:00.578 clat (msec): min=3, max=100, avg=16.25, stdev=11.11 00:10:00.578 lat (msec): min=6, max=100, avg=16.39, stdev=11.23 00:10:00.578 clat percentiles (msec): 00:10:00.578 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:10:00.578 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:10:00.578 | 70.00th=[ 15], 80.00th=[ 20], 90.00th=[ 24], 95.00th=[ 30], 00:10:00.578 | 99.00th=[ 79], 99.50th=[ 89], 99.90th=[ 101], 99.95th=[ 101], 00:10:00.578 | 99.99th=[ 101] 00:10:00.578 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:10:00.578 slat (usec): min=2, max=18654, avg=131.79, stdev=892.14 00:10:00.578 clat (msec): min=3, max=100, avg=19.44, stdev=14.31 00:10:00.578 lat (msec): min=3, max=100, avg=19.57, stdev=14.40 00:10:00.578 clat percentiles (msec): 00:10:00.578 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:10:00.578 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 16], 60.00th=[ 20], 00:10:00.578 | 70.00th=[ 21], 80.00th=[ 22], 90.00th=[ 39], 95.00th=[ 54], 00:10:00.579 | 99.00th=[ 82], 99.50th=[ 91], 99.90th=[ 99], 99.95th=[ 101], 00:10:00.579 | 99.99th=[ 101] 00:10:00.579 bw ( KiB/s): min=13576, max=15096, per=20.47%, avg=14336.00, stdev=1074.80, samples=2 00:10:00.579 iops : min= 3394, max= 3774, avg=3584.00, stdev=268.70, samples=2 00:10:00.579 lat (msec) : 4=0.27%, 10=9.48%, 20=63.26%, 50=23.01%, 100=3.89% 00:10:00.579 lat (msec) : 250=0.10% 00:10:00.579 cpu : usr=2.19%, sys=5.57%, ctx=310, majf=0, minf=1 00:10:00.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:00.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.579 issued rwts: total=3568,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.579 job3: (groupid=0, jobs=1): err= 0: pid=310340: Tue Oct 8 18:17:53 2024 00:10:00.579 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:10:00.579 slat (nsec): min=1470, max=5391.4k, avg=88942.88, stdev=488608.81 00:10:00.579 clat (usec): 
min=4339, max=17566, avg=11487.06, stdev=1658.72 00:10:00.579 lat (usec): min=4348, max=17576, avg=11576.00, stdev=1699.95 00:10:00.579 clat percentiles (usec): 00:10:00.579 | 1.00th=[ 7177], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[10683], 00:10:00.579 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:10:00.579 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13304], 95.00th=[14222], 00:10:00.579 | 99.00th=[16319], 99.50th=[16450], 99.90th=[17433], 99.95th=[17433], 00:10:00.579 | 99.99th=[17695] 00:10:00.579 write: IOPS=5638, BW=22.0MiB/s (23.1MB/s)(22.1MiB/1002msec); 0 zone resets 00:10:00.579 slat (usec): min=2, max=7851, avg=81.23, stdev=427.86 00:10:00.579 clat (usec): min=255, max=20265, avg=11034.88, stdev=1867.88 00:10:00.579 lat (usec): min=422, max=20269, avg=11116.11, stdev=1905.09 00:10:00.579 clat percentiles (usec): 00:10:00.579 | 1.00th=[ 4359], 5.00th=[ 7832], 10.00th=[ 8979], 20.00th=[ 9765], 00:10:00.579 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:10:00.579 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12125], 95.00th=[13173], 00:10:00.579 | 99.00th=[15926], 99.50th=[16581], 99.90th=[20317], 99.95th=[20317], 00:10:00.579 | 99.99th=[20317] 00:10:00.579 bw ( KiB/s): min=20480, max=24576, per=32.16%, avg=22528.00, stdev=2896.31, samples=2 00:10:00.579 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:10:00.579 lat (usec) : 500=0.03% 00:10:00.579 lat (msec) : 2=0.31%, 4=0.06%, 10=18.79%, 20=80.76%, 50=0.05% 00:10:00.579 cpu : usr=3.90%, sys=8.29%, ctx=558, majf=0, minf=2 00:10:00.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:00.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.579 issued rwts: total=5632,5650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.579 00:10:00.579 Run status group 0 (all jobs): 00:10:00.579 READ: bw=65.2MiB/s (68.4MB/s), 9.89MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=65.9MiB (69.1MB), run=1002-1011msec 00:10:00.579 WRITE: bw=68.4MiB/s (71.7MB/s), 11.1MiB/s-22.0MiB/s (11.6MB/s-23.1MB/s), io=69.2MiB (72.5MB), run=1002-1011msec 00:10:00.579 00:10:00.579 Disk stats (read/write): 00:10:00.579 nvme0n1: ios=4356/4608, merge=0/0, ticks=15058/22420, in_queue=37478, util=87.07% 00:10:00.579 nvme0n2: ios=2088/2343, merge=0/0, ticks=32078/72886, in_queue=104964, util=96.35% 00:10:00.579 nvme0n3: ios=3119/3151, merge=0/0, ticks=47402/54947, in_queue=102349, util=96.25% 00:10:00.579 nvme0n4: ios=4630/5059, merge=0/0, ticks=26593/27367, in_queue=53960, util=96.12% 00:10:00.579 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:00.863 [global] 00:10:00.863 thread=1 00:10:00.863 invalidate=1 00:10:00.863 rw=randwrite 00:10:00.863 time_based=1 00:10:00.863 runtime=1 00:10:00.863 ioengine=libaio 00:10:00.863 direct=1 00:10:00.863 bs=4096 00:10:00.863 iodepth=128 00:10:00.863 norandommap=0 00:10:00.863 numjobs=1 00:10:00.863 00:10:00.863 verify_dump=1 00:10:00.863 verify_backlog=512 00:10:00.863 verify_state_save=0 00:10:00.863 do_verify=1 00:10:00.863 verify=crc32c-intel 00:10:00.863 [job0] 00:10:00.863 filename=/dev/nvme0n1 00:10:00.863 [job1] 00:10:00.863 filename=/dev/nvme0n2 00:10:00.863 [job2] 00:10:00.863 filename=/dev/nvme0n3 
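
The [global]/[job] lines around this point (the listing resumes with job3 below) are fio-wrapper echoing the job file it feeds to fio: one shared [global] section, then one job per namespace of cnode1, /dev/nvme0n1 through /dev/nvme0n4 being the four namespaces added earlier (Malloc0, Malloc1, raid0, concat0, presumably in that nsid order). A plausible reconstruction of the generated file for this randwrite pass; the parameter values are copied from the dump, while the file path and the cat/heredoc mechanics are assumptions:

    # Hypothetical equivalent of what fio-wrapper generates for this invocation.
    cat > /tmp/nvmf_fio.job <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=randwrite
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=128
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio /tmp/nvmf_fio.job
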
00:10:00.863 [job3] 00:10:00.863 filename=/dev/nvme0n4 00:10:00.863 Could not set queue depth (nvme0n1) 00:10:00.863 Could not set queue depth (nvme0n2) 00:10:00.863 Could not set queue depth (nvme0n3) 00:10:00.863 Could not set queue depth (nvme0n4) 00:10:01.127 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.128 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.128 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.128 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.128 fio-3.35 00:10:01.128 Starting 4 threads 00:10:02.504 00:10:02.504 job0: (groupid=0, jobs=1): err= 0: pid=310716: Tue Oct 8 18:17:55 2024 00:10:02.504 read: IOPS=4775, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1003msec) 00:10:02.504 slat (nsec): min=1368, max=15159k, avg=96754.68, stdev=633842.97 00:10:02.504 clat (usec): min=632, max=39560, avg=11792.71, stdev=4447.84 00:10:02.504 lat (usec): min=2802, max=39571, avg=11889.46, stdev=4493.82 00:10:02.504 clat percentiles (usec): 00:10:02.504 | 1.00th=[ 4080], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10028], 00:10:02.504 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:10:02.504 | 70.00th=[11338], 80.00th=[12387], 90.00th=[15533], 95.00th=[18744], 00:10:02.504 | 99.00th=[34341], 99.50th=[35390], 99.90th=[39584], 99.95th=[39584], 00:10:02.504 | 99.99th=[39584] 00:10:02.504 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:02.504 slat (nsec): min=1943, max=8073.9k, avg=99382.43, stdev=517192.64 00:10:02.504 clat (usec): min=1518, max=55727, avg=13821.91, stdev=9350.79 00:10:02.504 lat (usec): min=1533, max=55739, avg=13921.29, stdev=9411.31 00:10:02.504 clat percentiles (usec): 00:10:02.504 | 1.00th=[ 3261], 5.00th=[ 5276], 10.00th=[ 7439], 20.00th=[ 8848], 00:10:02.504 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:10:02.504 | 70.00th=[11076], 80.00th=[19268], 90.00th=[26084], 95.00th=[33162], 00:10:02.504 | 99.00th=[52167], 99.50th=[53740], 99.90th=[55837], 99.95th=[55837], 00:10:02.504 | 99.99th=[55837] 00:10:02.504 bw ( KiB/s): min=16384, max=24576, per=28.76%, avg=20480.00, stdev=5792.62, samples=2 00:10:02.504 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:10:02.504 lat (usec) : 750=0.01% 00:10:02.504 lat (msec) : 2=0.04%, 4=1.88%, 10=24.37%, 20=61.81%, 50=11.00% 00:10:02.504 lat (msec) : 100=0.90% 00:10:02.504 cpu : usr=2.99%, sys=5.69%, ctx=476, majf=0, minf=1 00:10:02.504 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:02.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.504 issued rwts: total=4790,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.504 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.504 job1: (groupid=0, jobs=1): err= 0: pid=310717: Tue Oct 8 18:17:55 2024 00:10:02.504 read: IOPS=3414, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1008msec) 00:10:02.504 slat (nsec): min=1107, max=18179k, avg=130271.32, stdev=914811.45 00:10:02.504 clat (usec): min=3077, max=51620, avg=16084.29, stdev=9426.76 00:10:02.504 lat (usec): min=4619, max=51678, avg=16214.56, stdev=9491.85 00:10:02.504 clat percentiles (usec): 00:10:02.504 | 1.00th=[ 
6652], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10159], 00:10:02.504 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12387], 60.00th=[13304], 00:10:02.504 | 70.00th=[16188], 80.00th=[19792], 90.00th=[28705], 95.00th=[38011], 00:10:02.504 | 99.00th=[50594], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:10:02.504 | 99.99th=[51643] 00:10:02.504 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:10:02.504 slat (usec): min=2, max=27005, avg=136.98, stdev=1018.01 00:10:02.504 clat (usec): min=2750, max=69943, avg=20103.20, stdev=11564.56 00:10:02.504 lat (usec): min=2757, max=69974, avg=20240.18, stdev=11657.92 00:10:02.504 clat percentiles (usec): 00:10:02.504 | 1.00th=[ 4490], 5.00th=[ 6915], 10.00th=[ 8225], 20.00th=[10814], 00:10:02.504 | 30.00th=[11863], 40.00th=[13042], 50.00th=[15401], 60.00th=[21103], 00:10:02.504 | 70.00th=[26084], 80.00th=[31327], 90.00th=[36439], 95.00th=[42730], 00:10:02.504 | 99.00th=[54264], 99.50th=[54264], 99.90th=[54264], 99.95th=[58459], 00:10:02.504 | 99.99th=[69731] 00:10:02.505 bw ( KiB/s): min=12288, max=16384, per=20.13%, avg=14336.00, stdev=2896.31, samples=2 00:10:02.505 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:10:02.505 lat (msec) : 4=0.40%, 10=17.21%, 20=51.11%, 50=29.38%, 100=1.91% 00:10:02.505 cpu : usr=2.88%, sys=4.37%, ctx=326, majf=0, minf=1 00:10:02.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:02.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.505 issued rwts: total=3442,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.505 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.505 job2: (groupid=0, jobs=1): err= 0: pid=310718: Tue Oct 8 18:17:55 2024 00:10:02.505 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:10:02.505 slat (nsec): min=1372, max=13277k, avg=96413.86, stdev=601334.58 00:10:02.505 clat (usec): min=5941, max=29307, avg=12048.36, stdev=2438.15 00:10:02.505 lat (usec): min=5948, max=29314, avg=12144.77, stdev=2488.98 00:10:02.505 clat percentiles (usec): 00:10:02.505 | 1.00th=[ 7439], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[10683], 00:10:02.505 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:10:02.505 | 70.00th=[12780], 80.00th=[14091], 90.00th=[15533], 95.00th=[16057], 00:10:02.505 | 99.00th=[18220], 99.50th=[20055], 99.90th=[23462], 99.95th=[29230], 00:10:02.505 | 99.99th=[29230] 00:10:02.505 write: IOPS=5579, BW=21.8MiB/s (22.9MB/s)(21.9MiB/1005msec); 0 zone resets 00:10:02.505 slat (usec): min=2, max=12709, avg=84.32, stdev=503.89 00:10:02.505 clat (usec): min=4315, max=26985, avg=11698.04, stdev=2426.12 00:10:02.505 lat (usec): min=4385, max=26993, avg=11782.36, stdev=2465.29 00:10:02.505 clat percentiles (usec): 00:10:02.505 | 1.00th=[ 6259], 5.00th=[ 7767], 10.00th=[ 9241], 20.00th=[10421], 00:10:02.505 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:10:02.505 | 70.00th=[11863], 80.00th=[12649], 90.00th=[14484], 95.00th=[16057], 00:10:02.505 | 99.00th=[20317], 99.50th=[24249], 99.90th=[26870], 99.95th=[26870], 00:10:02.505 | 99.99th=[26870] 00:10:02.505 bw ( KiB/s): min=20480, max=23360, per=30.79%, avg=21920.00, stdev=2036.47, samples=2 00:10:02.505 iops : min= 5120, max= 5840, avg=5480.00, stdev=509.12, samples=2 00:10:02.505 lat (msec) : 10=15.99%, 20=83.11%, 50=0.90% 00:10:02.505 cpu : usr=3.69%, sys=7.07%, ctx=626, majf=0, 
minf=1 00:10:02.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:02.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.505 issued rwts: total=5120,5607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.505 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.505 job3: (groupid=0, jobs=1): err= 0: pid=310719: Tue Oct 8 18:17:55 2024 00:10:02.505 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:10:02.505 slat (nsec): min=1205, max=30327k, avg=153628.43, stdev=1203256.91 00:10:02.505 clat (usec): min=4231, max=88747, avg=21196.57, stdev=15946.75 00:10:02.505 lat (usec): min=4237, max=88771, avg=21350.19, stdev=16074.30 00:10:02.505 clat percentiles (usec): 00:10:02.505 | 1.00th=[ 8717], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11863], 00:10:02.505 | 30.00th=[12256], 40.00th=[13173], 50.00th=[13566], 60.00th=[17171], 00:10:02.505 | 70.00th=[18220], 80.00th=[27657], 90.00th=[47973], 95.00th=[66323], 00:10:02.505 | 99.00th=[73925], 99.50th=[77071], 99.90th=[83362], 99.95th=[85459], 00:10:02.505 | 99.99th=[88605] 00:10:02.505 write: IOPS=3617, BW=14.1MiB/s (14.8MB/s)(14.3MiB/1009msec); 0 zone resets 00:10:02.505 slat (nsec): min=1971, max=13697k, avg=109892.07, stdev=714803.50 00:10:02.505 clat (usec): min=1113, max=61323, avg=14266.92, stdev=7037.86 00:10:02.505 lat (usec): min=1126, max=61333, avg=14376.82, stdev=7112.42 00:10:02.505 clat percentiles (usec): 00:10:02.505 | 1.00th=[ 3392], 5.00th=[ 6521], 10.00th=[ 9110], 20.00th=[10945], 00:10:02.505 | 30.00th=[11338], 40.00th=[11469], 50.00th=[12125], 60.00th=[13829], 00:10:02.505 | 70.00th=[14877], 80.00th=[16909], 90.00th=[22152], 95.00th=[22938], 00:10:02.505 | 99.00th=[49021], 99.50th=[57410], 99.90th=[61080], 99.95th=[61080], 00:10:02.505 | 99.99th=[61080] 00:10:02.505 bw ( KiB/s): min= 8192, max=20480, per=20.13%, avg=14336.00, stdev=8688.93, samples=2 00:10:02.505 iops : min= 2048, max= 5120, avg=3584.00, stdev=2172.23, samples=2 00:10:02.505 lat (msec) : 2=0.03%, 4=0.77%, 10=9.90%, 20=68.11%, 50=16.49% 00:10:02.505 lat (msec) : 100=4.70% 00:10:02.505 cpu : usr=3.37%, sys=4.27%, ctx=260, majf=0, minf=2 00:10:02.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:02.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.505 issued rwts: total=3584,3650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.505 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.505 00:10:02.505 Run status group 0 (all jobs): 00:10:02.505 READ: bw=65.6MiB/s (68.8MB/s), 13.3MiB/s-19.9MiB/s (14.0MB/s-20.9MB/s), io=66.2MiB (69.4MB), run=1003-1009msec 00:10:02.505 WRITE: bw=69.5MiB/s (72.9MB/s), 13.9MiB/s-21.8MiB/s (14.6MB/s-22.9MB/s), io=70.2MiB (73.6MB), run=1003-1009msec 00:10:02.505 00:10:02.505 Disk stats (read/write): 00:10:02.505 nvme0n1: ios=3963/4096, merge=0/0, ticks=36970/51735, in_queue=88705, util=93.09% 00:10:02.505 nvme0n2: ios=3103/3127, merge=0/0, ticks=35238/37511, in_queue=72749, util=99.29% 00:10:02.505 nvme0n3: ios=4457/4608, merge=0/0, ticks=36799/36550, in_queue=73349, util=98.54% 00:10:02.505 nvme0n4: ios=3072/3104, merge=0/0, ticks=27538/23691, in_queue=51229, util=89.72% 00:10:02.505 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:02.505 18:17:55 
00:10:02.505 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=310945
18:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
18:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:10:02.505 [global]
00:10:02.505 thread=1
00:10:02.505 invalidate=1
00:10:02.505 rw=read
00:10:02.505 time_based=1
00:10:02.505 runtime=10
00:10:02.505 ioengine=libaio
00:10:02.505 direct=1
00:10:02.505 bs=4096
00:10:02.505 iodepth=1
00:10:02.505 norandommap=1
00:10:02.505 numjobs=1
00:10:02.505
00:10:02.505 [job0]
00:10:02.505 filename=/dev/nvme0n1
00:10:02.505 [job1]
00:10:02.505 filename=/dev/nvme0n2
00:10:02.505 [job2]
00:10:02.505 filename=/dev/nvme0n3
00:10:02.505 [job3]
00:10:02.505 filename=/dev/nvme0n4
00:10:02.505 Could not set queue depth (nvme0n1)
00:10:02.505 Could not set queue depth (nvme0n2)
00:10:02.505 Could not set queue depth (nvme0n3)
00:10:02.505 Could not set queue depth (nvme0n4)
00:10:02.505 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:02.505 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:02.505 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:02.505 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:02.505 fio-3.35
00:10:02.505 Starting 4 threads
00:10:05.791 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:10:05.791 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42889216, buflen=4096
00:10:05.791 fio: pid=311094, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:10:05.791 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:10:05.791 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:05.791 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:10:05.791 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=364544, buflen=4096
00:10:05.791 fio: pid=311093, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:10:05.791 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=311296, buflen=4096
00:10:05.791 fio: pid=311091, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:10:05.791 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:05.791 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:10:06.050 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=35479552, buflen=4096
00:10:06.050 fio: pid=311092, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:10:06.050 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:06.050 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:10:06.050
00:10:06.050 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=311091: Tue Oct 8 18:17:59 2024
00:10:06.050 read: IOPS=24, BW=96.9KiB/s (99.2kB/s)(304KiB/3138msec)
00:10:06.050 slat (usec): min=10, max=26802, avg=361.98, stdev=3052.85
00:10:06.050 clat (usec): min=516, max=42961, avg=40639.16, stdev=4683.57
00:10:06.050 lat (usec): min=547, max=68939, avg=41005.64, stdev=5694.77
00:10:06.050 clat percentiles (usec):
00:10:06.050 | 1.00th=[ 519], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:10:06.050 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:10:06.050 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206],
00:10:06.050 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:10:06.050 | 99.99th=[42730]
00:10:06.050 bw ( KiB/s): min= 92, max= 104, per=0.42%, avg=96.67, stdev= 3.93, samples=6
00:10:06.050 iops : min= 23, max= 26, avg=24.17, stdev= 0.98, samples=6
00:10:06.050 lat (usec) : 750=1.30%
00:10:06.050 lat (msec) : 50=97.40%
00:10:06.050 cpu : usr=0.10%, sys=0.00%, ctx=78, majf=0, minf=1
00:10:06.050 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:06.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:06.050 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:06.050 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:06.050 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:06.050 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=311092: Tue Oct 8 18:17:59 2024
00:10:06.050 read: IOPS=2580, BW=10.1MiB/s (10.6MB/s)(33.8MiB/3357msec)
00:10:06.050 slat (usec): min=6, max=15660, avg=10.92, stdev=192.86
00:10:06.050 clat (usec): min=168, max=41007, avg=372.18, stdev=498.88
00:10:06.050 lat (usec): min=176, max=41014, avg=383.10, stdev=534.78
00:10:06.050 clat percentiles (usec):
00:10:06.050 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 208], 20.00th=[ 243],
00:10:06.050 | 30.00th=[ 392], 40.00th=[ 396], 50.00th=[ 400], 60.00th=[ 404],
00:10:06.050 | 70.00th=[ 408], 80.00th=[ 412], 90.00th=[ 416], 95.00th=[ 420],
00:10:06.050 | 99.00th=[ 429], 99.50th=[ 433], 99.90th=[ 519], 99.95th=[ 586],
00:10:06.050 | 99.99th=[41157]
00:10:06.050 bw ( KiB/s): min= 9632, max=11489, per=43.25%, avg=9944.67, stdev=756.58, samples=6
00:10:06.050 iops : min= 2408, max= 2872, avg=2486.00, stdev=189.10, samples=6
00:10:06.050 lat (usec) : 250=20.30%, 500=79.56%, 750=0.08%, 1000=0.01%
00:10:06.050 lat (msec) : 20=0.02%, 50=0.01%
00:10:06.050 cpu : usr=1.13%, sys=4.50%, ctx=8666, majf=0, minf=2
00:10:06.050 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:06.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:06.050 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:06.051 issued rwts: total=8663,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:06.051 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:06.051 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=311093: Tue Oct 8 18:17:59 2024
00:10:06.051 read: IOPS=30, BW=121KiB/s (124kB/s)(356KiB/2945msec)
00:10:06.051 slat (nsec): min=6820, max=34585, avg=19206.50, stdev=6647.33
00:10:06.051 clat (usec): min=192, max=42259, avg=32828.36, stdev=16485.12
00:10:06.051 lat (usec): min=201, max=42294, avg=32847.66, stdev=16488.70
00:10:06.051 clat percentiles (usec):
00:10:06.051 | 1.00th=[ 192], 5.00th=[ 233], 10.00th=[ 251], 20.00th=[ 498],
00:10:06.051 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:10:06.051 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206],
00:10:06.051 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:10:06.051 | 99.99th=[42206]
00:10:06.051 bw ( KiB/s): min= 96, max= 112, per=0.43%, avg=100.80, stdev= 7.16, samples=5
00:10:06.051 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5
00:10:06.051 lat (usec) : 250=8.89%, 500=11.11%
00:10:06.051 lat (msec) : 50=78.89%
00:10:06.051 cpu : usr=0.00%, sys=0.10%, ctx=90, majf=0, minf=2
00:10:06.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:06.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:06.051 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:06.051 issued rwts: total=90,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:06.051 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:06.051 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=311094: Tue Oct 8 18:17:59 2024
00:10:06.051 read: IOPS=3845, BW=15.0MiB/s (15.8MB/s)(40.9MiB/2723msec)
00:10:06.051 slat (nsec): min=6387, max=33504, avg=7374.42, stdev=1052.91
00:10:06.051 clat (usec): min=184, max=640, avg=249.61, stdev=14.04
00:10:06.051 lat (usec): min=190, max=673, avg=256.99, stdev=14.06
00:10:06.051 clat percentiles (usec):
00:10:06.051 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241],
00:10:06.051 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253],
00:10:06.051 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 269],
00:10:06.051 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 379], 99.95th=[ 408],
00:10:06.051 | 99.99th=[ 433]
00:10:06.051 bw ( KiB/s): min=15488, max=15552, per=67.49%, avg=15518.40, stdev=22.91, samples=5
00:10:06.051 iops : min= 3872, max= 3888, avg=3879.60, stdev= 5.73, samples=5
00:10:06.051 lat (usec) : 250=51.20%, 500=48.78%, 750=0.01%
00:10:06.051 cpu : usr=0.84%, sys=3.56%, ctx=10472, majf=0, minf=2
00:10:06.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:06.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:06.051 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:06.051 issued rwts: total=10472,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:06.051 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:06.051
00:10:06.051 Run status group 0 (all jobs):
00:10:06.051 READ: bw=22.5MiB/s (23.5MB/s), 96.9KiB/s-15.0MiB/s (99.2kB/s-15.8MB/s), io=75.4MiB (79.0MB), run=2723-3357msec
00:10:06.051
00:10:06.051 Disk stats (read/write):
00:10:06.051 nvme0n1: ios=75/0, merge=0/0, ticks=3048/0, in_queue=3048, util=94.89%
00:10:06.051 nvme0n2: ios=8700/0, merge=0/0, ticks=4216/0, in_queue=4216, util=98.56%
00:10:06.051 nvme0n3: ios=87/0, merge=0/0, ticks=2841/0, in_queue=2841, util=96.49%
00:10:06.051 nvme0n4: ios=10099/0, merge=0/0, ticks=2490/0, in_queue=2490, util=96.41%
00:10:06.310 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:06.310 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:10:06.569 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:06.569 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:10:06.827 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:06.827 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:10:07.086 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:07.086 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:10:07.086 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:10:07.086 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 310945
00:10:07.086 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:10:07.086 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:07.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:07.353 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:07.353 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0
00:10:07.353 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:10:07.353 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:07.353 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:10:07.353 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:07.353 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0
00:10:07.353 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:10:07.353 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:10:07.353 nvmf hotplug test: fio failed as expected
00:10:07.353 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:07.612 rmmod nvme_tcp
00:10:07.612 rmmod nvme_fabrics
00:10:07.612 rmmod nvme_keyring
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 308006 ']'
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 308006
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 308006 ']'
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 308006
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 308006
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 308006'
00:10:07.612 killing process with pid 308006
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 308006
00:10:07.612 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 308006
00:10:07.872 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:10:07.872 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:10:07.872 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:10:07.872 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:10:07.872 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save
00:10:07.872 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:10:07.872 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore
00:10:07.872 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:07.872 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:07.872 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:07.872 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:07.872 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:10.409
00:10:10.409 real 0m27.694s
00:10:10.409 user 1m50.446s
00:10:10.409 sys 0m8.764s
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:10:10.409 ************************************
00:10:10.409 END TEST nvmf_fio_target
00:10:10.409 ************************************
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:10.409 ************************************
00:10:10.409 START TEST nvmf_bdevio
00:10:10.409 ************************************
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:10:10.409 * Looking for test storage...
00:10:10.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:10:10.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:10.409 --rc genhtml_branch_coverage=1
00:10:10.409 --rc genhtml_function_coverage=1
00:10:10.409 --rc genhtml_legend=1
00:10:10.409 --rc geninfo_all_blocks=1
00:10:10.409 --rc geninfo_unexecuted_blocks=1
00:10:10.409
00:10:10.409 '
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:10:10.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:10.409 --rc genhtml_branch_coverage=1
00:10:10.409 --rc genhtml_function_coverage=1
00:10:10.409 --rc genhtml_legend=1
00:10:10.409 --rc geninfo_all_blocks=1
00:10:10.409 --rc geninfo_unexecuted_blocks=1
00:10:10.409
00:10:10.409 '
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:10:10.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:10.409 --rc genhtml_branch_coverage=1
00:10:10.409 --rc genhtml_function_coverage=1
00:10:10.409 --rc genhtml_legend=1
00:10:10.409 --rc geninfo_all_blocks=1
00:10:10.409 --rc geninfo_unexecuted_blocks=1
00:10:10.409
00:10:10.409 '
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:10:10.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:10.409 --rc genhtml_branch_coverage=1
00:10:10.409 --rc genhtml_function_coverage=1
00:10:10.409 --rc genhtml_legend=1
00:10:10.409 --rc geninfo_all_blocks=1
00:10:10.409 --rc geninfo_unexecuted_blocks=1
00:10:10.409
00:10:10.409 '
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:10.409 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:10.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable
00:10:10.410 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=()
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=()
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=()
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=()
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=()
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:10:16.982 Found 0000:86:00.0 (0x8086 - 0x159b)
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:10:16.982 Found 0000:86:00.1 (0x8086 - 0x159b)
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:10:16.982 Found net devices under 0000:86:00.0: cvl_0_0
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:10:16.982 Found net devices under 0000:86:00.1: cvl_0_1
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:16.982 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:16.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:16.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms
00:10:16.983
00:10:16.983 --- 10.0.0.2 ping statistics ---
00:10:16.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:16.983 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:16.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:16.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms
00:10:16.983
00:10:16.983 --- 10.0.0.1 ping statistics ---
00:10:16.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:16.983 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=315949
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 315949
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 315949 ']'
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:16.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:16.983 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:16.983 [2024-10-08 18:18:09.502165] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:10:16.983 [2024-10-08 18:18:09.502219] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:16.983 [2024-10-08 18:18:09.576104] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:16.983 [2024-10-08 18:18:09.653067] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:16.983 [2024-10-08 18:18:09.653105] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:16.983 [2024-10-08 18:18:09.653113] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:16.983 [2024-10-08 18:18:09.653119] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:16.983 [2024-10-08 18:18:09.653124] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:16.983 [2024-10-08 18:18:09.654616] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:10:16.983 [2024-10-08 18:18:09.654724] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:10:16.983 [2024-10-08 18:18:09.654844] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:10:16.983 [2024-10-08 18:18:09.654845] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:17.242 [2024-10-08 18:18:10.394787] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:17.242 Malloc0
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.242 18:18:10
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.242 [2024-10-08 18:18:10.445894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:17.242 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:17.243 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:17.243 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:17.243 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:17.243 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:17.243 { 00:10:17.243 "params": { 00:10:17.243 "name": "Nvme$subsystem", 00:10:17.243 "trtype": "$TEST_TRANSPORT", 00:10:17.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:17.243 "adrfam": "ipv4", 00:10:17.243 "trsvcid": "$NVMF_PORT", 00:10:17.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:17.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:17.243 "hdgst": ${hdgst:-false}, 00:10:17.243 "ddgst": ${ddgst:-false} 00:10:17.243 }, 00:10:17.243 "method": "bdev_nvme_attach_controller" 00:10:17.243 } 00:10:17.243 EOF 00:10:17.243 )") 00:10:17.243 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:17.243 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:10:17.243 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:17.243 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:17.243 "params": { 00:10:17.243 "name": "Nvme1", 00:10:17.243 "trtype": "tcp", 00:10:17.243 "traddr": "10.0.0.2", 00:10:17.243 "adrfam": "ipv4", 00:10:17.243 "trsvcid": "4420", 00:10:17.243 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:17.243 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:17.243 "hdgst": false, 00:10:17.243 "ddgst": false 00:10:17.243 }, 00:10:17.243 "method": "bdev_nvme_attach_controller" 00:10:17.243 }' 00:10:17.243 [2024-10-08 18:18:10.498316] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
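The rpc_cmd calls above talk to the freshly started nvmf_tgt over its UNIX socket; spelled out with scripts/rpc.py (a sketch with the values from this run, assuming the default /var/tmp/spdk.sock path that rpc_cmd targets), the provisioning reads roughly:

    # transport first, then a RAM-backed bdev, then the subsystem that exports it
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # transport opts copied from the trace
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001                               # -a: allow any host NQN to connect
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420

The gen_nvmf_target_json output printed below then simply points a single bdev_nvme_attach_controller call at that listener, which is how the bdevio process gets its Nvme1n1 block device.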
00:10:17.243 [2024-10-08 18:18:10.498368] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316207 ] 00:10:17.501 [2024-10-08 18:18:10.569279] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:17.501 [2024-10-08 18:18:10.644313] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.501 [2024-10-08 18:18:10.644423] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.501 [2024-10-08 18:18:10.644423] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.501 I/O targets: 00:10:17.501 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:17.501 00:10:17.501 00:10:17.501 CUnit - A unit testing framework for C - Version 2.1-3 00:10:17.501 http://cunit.sourceforge.net/ 00:10:17.501 00:10:17.501 00:10:17.501 Suite: bdevio tests on: Nvme1n1 00:10:17.759 Test: blockdev write read block ...passed 00:10:17.759 Test: blockdev write zeroes read block ...passed 00:10:17.759 Test: blockdev write zeroes read no split ...passed 00:10:17.759 Test: blockdev write zeroes read split ...passed 00:10:17.759 Test: blockdev write zeroes read split partial ...passed 00:10:17.759 Test: blockdev reset ...[2024-10-08 18:18:10.920908] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:17.759 [2024-10-08 18:18:10.920972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1a400 (9): Bad file descriptor 00:10:17.759 [2024-10-08 18:18:11.023777] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:17.759 passed 00:10:17.759 Test: blockdev write read 8 blocks ...passed 00:10:17.759 Test: blockdev write read size > 128k ...passed 00:10:17.759 Test: blockdev write read invalid size ...passed 00:10:18.016 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:18.016 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:18.016 Test: blockdev write read max offset ...passed 00:10:18.016 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:18.016 Test: blockdev writev readv 8 blocks ...passed 00:10:18.016 Test: blockdev writev readv 30 x 1block ...passed 00:10:18.016 Test: blockdev writev readv block ...passed 00:10:18.016 Test: blockdev writev readv size > 128k ...passed 00:10:18.016 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:18.016 Test: blockdev comparev and writev ...[2024-10-08 18:18:11.317338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.016 [2024-10-08 18:18:11.317365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:18.016 [2024-10-08 18:18:11.317383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.016 [2024-10-08 18:18:11.317391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:18.016 [2024-10-08 18:18:11.317618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.016 [2024-10-08 18:18:11.317628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:18.017 [2024-10-08 18:18:11.317639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.017 [2024-10-08 18:18:11.317650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:18.017 [2024-10-08 18:18:11.317891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.017 [2024-10-08 18:18:11.317900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:18.017 [2024-10-08 18:18:11.317912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.017 [2024-10-08 18:18:11.317918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:18.017 [2024-10-08 18:18:11.318155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.017 [2024-10-08 18:18:11.318164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:18.017 [2024-10-08 18:18:11.318175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.017 [2024-10-08 18:18:11.318182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:10:18.274 passed
00:10:18.274 Test: blockdev nvme passthru rw ...passed
00:10:18.274 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:18:11.401744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:18.274 [2024-10-08 18:18:11.401759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:10:18.274 [2024-10-08 18:18:11.401860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:18.275 [2024-10-08 18:18:11.401869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:10:18.275 [2024-10-08 18:18:11.401970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:18.275 [2024-10-08 18:18:11.401980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:10:18.275 [2024-10-08 18:18:11.402082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:18.275 [2024-10-08 18:18:11.402091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:10:18.275 passed
00:10:18.275 Test: blockdev nvme admin passthru ...passed
00:10:18.275 Test: blockdev copy ...passed
00:10:18.275
00:10:18.275 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:10:18.275               suites      1      1    n/a      0        0
00:10:18.275                tests     23     23     23      0        0
00:10:18.275              asserts    152    152    152      0      n/a
00:10:18.275
00:10:18.275 Elapsed time = 1.305 seconds
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:18.533 rmmod nvme_tcp
00:10:18.533 rmmod nvme_fabrics
00:10:18.533 rmmod nvme_keyring
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
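The nvmftestfini teardown traced just above and continuing below is the mirror image of the setup; a paraphrase in plain shell (the netns deletion step is an assumption about what remove_spdk_ns does, since its body is elided from this trace):

    modprobe -v -r nvme-tcp           # unloads nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rule
    ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of remove_spdk_ns
    ip -4 addr flush cvl_0_1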
00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 315949 ']' 00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 315949 00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 315949 ']' 00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 315949 00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 315949 00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 315949' 00:10:18.533 killing process with pid 315949 00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 315949 00:10:18.533 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 315949 00:10:18.792 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:18.792 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:18.792 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:18.792 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:18.792 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:10:18.792 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:18.792 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:10:18.792 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:18.792 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:18.792 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.792 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.792 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:21.330 00:10:21.330 real 0m10.844s 00:10:21.330 user 0m13.251s 00:10:21.330 sys 0m5.095s 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.330 ************************************ 00:10:21.330 END TEST nvmf_bdevio 00:10:21.330 ************************************ 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:21.330 00:10:21.330 real 4m47.299s 00:10:21.330 user 10m50.156s 00:10:21.330 sys 1m39.405s 00:10:21.330 
18:18:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.330 ************************************ 00:10:21.330 END TEST nvmf_target_core 00:10:21.330 ************************************ 00:10:21.330 18:18:14 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:21.330 18:18:14 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:21.330 18:18:14 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.330 18:18:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:21.330 ************************************ 00:10:21.330 START TEST nvmf_target_extra 00:10:21.330 ************************************ 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:21.330 * Looking for test storage... 00:10:21.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:21.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.330 --rc genhtml_branch_coverage=1 00:10:21.330 --rc genhtml_function_coverage=1 00:10:21.330 --rc genhtml_legend=1 00:10:21.330 --rc geninfo_all_blocks=1 00:10:21.330 --rc geninfo_unexecuted_blocks=1 00:10:21.330 00:10:21.330 ' 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:21.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.330 --rc genhtml_branch_coverage=1 00:10:21.330 --rc genhtml_function_coverage=1 00:10:21.330 --rc genhtml_legend=1 00:10:21.330 --rc geninfo_all_blocks=1 00:10:21.330 --rc geninfo_unexecuted_blocks=1 00:10:21.330 00:10:21.330 ' 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:21.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.330 --rc genhtml_branch_coverage=1 00:10:21.330 --rc genhtml_function_coverage=1 00:10:21.330 --rc genhtml_legend=1 00:10:21.330 --rc geninfo_all_blocks=1 00:10:21.330 --rc geninfo_unexecuted_blocks=1 00:10:21.330 00:10:21.330 ' 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:21.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.330 --rc genhtml_branch_coverage=1 00:10:21.330 --rc genhtml_function_coverage=1 00:10:21.330 --rc genhtml_legend=1 00:10:21.330 --rc geninfo_all_blocks=1 00:10:21.330 --rc geninfo_unexecuted_blocks=1 00:10:21.330 00:10:21.330 ' 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.330 18:18:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:21.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:21.331 ************************************ 00:10:21.331 START TEST nvmf_example 00:10:21.331 ************************************ 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:21.331 * Looking for test storage... 
00:10:21.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:21.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.331 --rc genhtml_branch_coverage=1 00:10:21.331 --rc genhtml_function_coverage=1 00:10:21.331 --rc genhtml_legend=1 00:10:21.331 --rc geninfo_all_blocks=1 00:10:21.331 --rc geninfo_unexecuted_blocks=1 00:10:21.331 00:10:21.331 ' 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:21.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.331 --rc genhtml_branch_coverage=1 00:10:21.331 --rc genhtml_function_coverage=1 00:10:21.331 --rc genhtml_legend=1 00:10:21.331 --rc geninfo_all_blocks=1 00:10:21.331 --rc geninfo_unexecuted_blocks=1 00:10:21.331 00:10:21.331 ' 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:21.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.331 --rc genhtml_branch_coverage=1 00:10:21.331 --rc genhtml_function_coverage=1 00:10:21.331 --rc genhtml_legend=1 00:10:21.331 --rc geninfo_all_blocks=1 00:10:21.331 --rc geninfo_unexecuted_blocks=1 00:10:21.331 00:10:21.331 ' 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:21.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.331 --rc genhtml_branch_coverage=1 00:10:21.331 --rc genhtml_function_coverage=1 00:10:21.331 --rc genhtml_legend=1 00:10:21.331 --rc geninfo_all_blocks=1 00:10:21.331 --rc geninfo_unexecuted_blocks=1 00:10:21.331 00:10:21.331 ' 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:21.331 18:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.331 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:21.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:21.332 18:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:21.332 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.903 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.903 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:27.903 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:27.903 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:27.903 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:27.903 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:27.903 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:27.903 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:27.903 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:27.903 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:27.903 18:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:27.903 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:27.903 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:27.904 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:27.904 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:27.904 Found net devices under 0000:86:00.0: cvl_0_0 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:27.904 Found net devices under 0000:86:00.1: cvl_0_1 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.904 18:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:27.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:27.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:10:27.904 00:10:27.904 --- 10.0.0.2 ping statistics --- 00:10:27.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.904 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:27.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:10:27.904 00:10:27.904 --- 10.0.0.1 ping statistics --- 00:10:27.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.904 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=320146 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:27.904 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 320146 00:10:27.905 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 320146 ']' 00:10:27.905 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.905 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:27.905 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example 
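
With connectivity proven, the example target (build/examples/nvmf) is launched inside the namespace with core mask 0xF, and waitforlisten blocks until pid 320146 is up and serving RPCs on /var/tmp/spdk.sock. A minimal stand-in for that wait, assuming only that the app creates the socket once ready (the real helper in autotest_common.sh is more thorough and also polls the RPC server itself):

    pid=320146
    rpc_sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
        [ -S "$rpc_sock" ] && break          # socket exists -> app is listening
        sleep 0.1
    done
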
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.905 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:27.905 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.473 18:18:21 
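
The rpc_cmd calls above provision the whole target in five steps: a TCP transport, a RAM-backed bdev, a subsystem, a namespace mapping the bdev into it, and a listener on the target address. The same sequence can be issued directly with scripts/rpc.py against the default /var/tmp/spdk.sock socket (method names and arguments exactly as in the trace; rpc_cmd is just the harness's wrapper around them):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192     # -u: 8 KiB in-capsule data size
    $rpc bdev_malloc_create 64 512                   # 64 MiB malloc bdev, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

-a allows any host NQN to connect and -s sets the subsystem serial number, so the perf initiator below needs no host registration step.
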
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:10:28.473 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:38.454 Initializing NVMe Controllers
00:10:38.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:38.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:38.454 Initialization complete. Launching workers.
00:10:38.454 ========================================================
00:10:38.454                                                                            Latency(us)
00:10:38.454 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:10:38.454 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   18495.82      72.25    3459.74     694.33   16551.32
00:10:38.454 ========================================================
00:10:38.454 Total                                                                   :   18495.82      72.25    3459.74     694.33   16551.32
00:10:38.454
00:10:38.454 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:10:38.454 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:10:38.454 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup
00:10:38.454 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:38.714 rmmod nvme_tcp
00:10:38.714 rmmod nvme_fabrics
00:10:38.714 rmmod nvme_keyring
00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 320146 ']'
00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 320146
00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 320146 ']'
00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 320146
00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 320146
00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # 
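
The summary line above is internally consistent, which is a quick way to sanity-check a perf run: at the 4 KiB I/O size (-o 4096), 18495.82 IOPS is 18495.82 x 4096 / 2^20, about 72.25 MiB/s, matching the MiB/s column; and by Little's law a queue depth of 64 at that rate implies a mean latency of roughly 64 / 18495.82 s, about 3460 us, right on the reported 3459.74 us average. For example:

    echo 'scale=4; 18495.82 * 4096 / 1048576' | bc   # 72.2493 -> reported 72.25 MiB/s
    echo 'scale=4; 64 * 1000000 / 18495.82' | bc     # 3460.24 -> reported 3459.74 us avg
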
process_name=nvmf 00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 320146' 00:10:38.714 killing process with pid 320146 00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 320146 00:10:38.714 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 320146 00:10:38.974 nvmf threads initialize successfully 00:10:38.974 bdev subsystem init successfully 00:10:38.974 created a nvmf target service 00:10:38.974 create targets's poll groups done 00:10:38.974 all subsystems of target started 00:10:38.974 nvmf target is running 00:10:38.974 all subsystems of target stopped 00:10:38.974 destroy targets's poll groups done 00:10:38.974 destroyed the nvmf target service 00:10:38.974 bdev subsystem finish successfully 00:10:38.974 nvmf threads destroy successfully 00:10:38.974 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:38.974 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:38.974 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:38.974 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:38.974 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:10:38.974 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:38.974 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:10:38.974 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.974 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:38.974 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.974 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.974 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.882 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:40.882 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:40.882 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:40.882 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.882 00:10:40.882 real 0m19.774s 00:10:40.882 user 0m45.676s 00:10:40.882 sys 0m6.100s 00:10:40.882 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.882 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.882 ************************************ 00:10:40.882 END TEST nvmf_example 00:10:40.882 ************************************ 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:41.142 18:18:34 
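
Teardown above mirrors setup without disturbing unrelated host state: the nvme-tcp/nvme-fabrics modules are unloaded (with retries), the target pid is killed and reaped, and because every firewall rule the test added carries the SPDK_NVMF comment, the iptr helper can restore everything else verbatim before the namespace and addresses are removed. The firewall round-trip is just:

    # Drop only the comment-tagged rules; all other rules survive unchanged
    iptables-save | grep -v SPDK_NVMF | iptables-restore
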
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:41.142 ************************************ 00:10:41.142 START TEST nvmf_filesystem 00:10:41.142 ************************************ 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:41.142 * Looking for test storage... 00:10:41.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:41.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.142 --rc genhtml_branch_coverage=1 00:10:41.142 --rc genhtml_function_coverage=1 00:10:41.142 --rc genhtml_legend=1 00:10:41.142 --rc geninfo_all_blocks=1 00:10:41.142 --rc geninfo_unexecuted_blocks=1 00:10:41.142 00:10:41.142 ' 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:41.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.142 --rc genhtml_branch_coverage=1 00:10:41.142 --rc genhtml_function_coverage=1 00:10:41.142 --rc genhtml_legend=1 00:10:41.142 --rc geninfo_all_blocks=1 00:10:41.142 --rc geninfo_unexecuted_blocks=1 00:10:41.142 00:10:41.142 ' 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:41.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.142 --rc genhtml_branch_coverage=1 00:10:41.142 --rc genhtml_function_coverage=1 00:10:41.142 --rc genhtml_legend=1 00:10:41.142 --rc geninfo_all_blocks=1 00:10:41.142 --rc geninfo_unexecuted_blocks=1 00:10:41.142 00:10:41.142 ' 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:41.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.142 --rc genhtml_branch_coverage=1 00:10:41.142 --rc genhtml_function_coverage=1 00:10:41.142 --rc genhtml_legend=1 00:10:41.142 --rc geninfo_all_blocks=1 00:10:41.142 --rc geninfo_unexecuted_blocks=1 00:10:41.142 00:10:41.142 ' 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:41.142 18:18:34 
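
The cmp_versions walk traced above is how scripts/common.sh picks lcov options: both version strings are split on any of ., - or : into arrays, then compared field by field; for lt 1.15 2 the first fields already decide it (1 < 2, return 0), and the lcov_-prefixed --rc option spellings are selected. A trimmed sketch of that comparison, assuming purely numeric fields:

    # Field-wise dotted-version "less than", in the spirit of cmp_versions above
    lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first larger field: not less
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first smaller field: less
        done
        return 1                                              # all fields equal: not less
    }
    lt 1.15 2 && echo "1.15 < 2"
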
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:41.142 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:41.143 18:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:41.143 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:41.406 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:41.406 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:41.406 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:41.406 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:41.406 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:41.406 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:41.406 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:41.406 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:41.406 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:41.406 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:41.406 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:41.406 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:41.406 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
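
The CONFIG_* block above is autotest_common.sh sourcing test/common/build_config.sh, the shell-readable snapshot of how this SPDK tree was configured; downstream scripts simply branch on those variables. A minimal consumer, assuming the same workspace path:

    source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
    if [[ $CONFIG_UBSAN == y ]]; then
        echo "this tree was built with UBSAN instrumentation"
    fi
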
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:41.407 #define SPDK_CONFIG_H 00:10:41.407 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:41.407 #define SPDK_CONFIG_APPS 1 00:10:41.407 #define SPDK_CONFIG_ARCH native 00:10:41.407 #undef SPDK_CONFIG_ASAN 00:10:41.407 #undef SPDK_CONFIG_AVAHI 00:10:41.407 #undef SPDK_CONFIG_CET 00:10:41.407 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:41.407 #define SPDK_CONFIG_COVERAGE 1 00:10:41.407 #define SPDK_CONFIG_CROSS_PREFIX 00:10:41.407 #undef SPDK_CONFIG_CRYPTO 00:10:41.407 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:41.407 #undef SPDK_CONFIG_CUSTOMOCF 00:10:41.407 #undef SPDK_CONFIG_DAOS 00:10:41.407 #define SPDK_CONFIG_DAOS_DIR 00:10:41.407 #define SPDK_CONFIG_DEBUG 1 00:10:41.407 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:41.407 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:41.407 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:41.407 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:41.407 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:41.407 #undef SPDK_CONFIG_DPDK_UADK 00:10:41.407 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:41.407 #define SPDK_CONFIG_EXAMPLES 1 00:10:41.407 #undef SPDK_CONFIG_FC 00:10:41.407 #define SPDK_CONFIG_FC_PATH 00:10:41.407 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:41.407 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:41.407 #define SPDK_CONFIG_FSDEV 1 00:10:41.407 #undef SPDK_CONFIG_FUSE 00:10:41.407 #undef SPDK_CONFIG_FUZZER 00:10:41.407 #define SPDK_CONFIG_FUZZER_LIB 00:10:41.407 #undef SPDK_CONFIG_GOLANG 00:10:41.407 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:41.407 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:41.407 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:41.407 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:41.407 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:41.407 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:41.407 #undef SPDK_CONFIG_HAVE_LZ4 00:10:41.407 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:41.407 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:41.407 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:41.407 #define SPDK_CONFIG_IDXD 1 00:10:41.407 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:41.407 #undef SPDK_CONFIG_IPSEC_MB 00:10:41.407 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:41.407 #define SPDK_CONFIG_ISAL 1 00:10:41.407 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:41.407 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:41.407 #define SPDK_CONFIG_LIBDIR 00:10:41.407 #undef SPDK_CONFIG_LTO 00:10:41.407 #define SPDK_CONFIG_MAX_LCORES 128 00:10:41.407 #define SPDK_CONFIG_NVME_CUSE 1 00:10:41.407 #undef SPDK_CONFIG_OCF 00:10:41.407 #define SPDK_CONFIG_OCF_PATH 00:10:41.407 #define SPDK_CONFIG_OPENSSL_PATH 00:10:41.407 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:41.407 #define SPDK_CONFIG_PGO_DIR 00:10:41.407 #undef SPDK_CONFIG_PGO_USE 00:10:41.407 #define SPDK_CONFIG_PREFIX /usr/local 00:10:41.407 #undef SPDK_CONFIG_RAID5F 00:10:41.407 #undef SPDK_CONFIG_RBD 00:10:41.407 #define SPDK_CONFIG_RDMA 1 00:10:41.407 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:41.407 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:41.407 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:41.407 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:41.407 #define SPDK_CONFIG_SHARED 1 00:10:41.407 #undef SPDK_CONFIG_SMA 00:10:41.407 #define SPDK_CONFIG_TESTS 1 00:10:41.407 #undef SPDK_CONFIG_TSAN 00:10:41.407 #define SPDK_CONFIG_UBLK 1 00:10:41.407 #define SPDK_CONFIG_UBSAN 1 00:10:41.407 #undef SPDK_CONFIG_UNIT_TESTS 00:10:41.407 #undef SPDK_CONFIG_URING 00:10:41.407 #define 
SPDK_CONFIG_URING_PATH 00:10:41.407 #undef SPDK_CONFIG_URING_ZNS 00:10:41.407 #undef SPDK_CONFIG_USDT 00:10:41.407 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:41.407 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:41.407 #define SPDK_CONFIG_VFIO_USER 1 00:10:41.407 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:41.407 #define SPDK_CONFIG_VHOST 1 00:10:41.407 #define SPDK_CONFIG_VIRTIO 1 00:10:41.407 #undef SPDK_CONFIG_VTUNE 00:10:41.407 #define SPDK_CONFIG_VTUNE_DIR 00:10:41.407 #define SPDK_CONFIG_WERROR 1 00:10:41.407 #define SPDK_CONFIG_WPDK_DIR 00:10:41.407 #undef SPDK_CONFIG_XNVME 00:10:41.407 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.407 18:18:34 
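
The long run of backslash-escaped characters in the trace above is only xtrace quoting of a glob pattern: applications.sh compares the generated include/spdk/config.h against *#define SPDK_CONFIG_DEBUG* so that debug-app behavior is enabled only against a genuine debug build. The same check without the escaping noise:

    cfg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
    if [[ $(<"$cfg") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"
    fi
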
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
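
pm/common, sourced above, records which resource monitors need elevated privileges in an associative array; only collect-bmc-pm is marked 1, so a launcher can prefix exactly those entries with "sudo -E" and run the rest unprivileged. A sketch of how such a map drives the launch (the echo stands in for actually starting a monitor):

    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1
        [collect-cpu-load]=0
        [collect-cpu-temp]=0
        [collect-vmstat]=0
    )
    SUDO[0]="" SUDO[1]="sudo -E"
    for mon in "${!MONITOR_RESOURCES_SUDO[@]}"; do
        prefix=${SUDO[${MONITOR_RESOURCES_SUDO[$mon]}]}
        echo "would launch: $prefix $mon"
    done
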
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:41.407 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:41.408 
18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:41.408 18:18:34 
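
The long run of ": 0" / ": 1" and export pairs above (continuing just below with SPDK_TEST_SETUP) is xtrace's rendering of the shell default-assignment idiom: each flag is expanded inside a no-op : command with :=, which assigns the default only when the variable is unset or empty, then exported. Flags the job configured earlier (SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_TEST_NVMF_NICS=e810, SPDK_RUN_UBSAN=1, ...) therefore keep their values while everything else falls back to 0. The pattern, for one flag of each kind:

    : "${SPDK_TEST_VMD:=0}"       # unset in this job -> assigned 0 (xtrace shows ": 0")
    export SPDK_TEST_VMD
    : "${SPDK_TEST_NVMF:=0}"      # already 1 from the job config -> left at 1
    export SPDK_TEST_NVMF
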
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.408 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
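The LD_LIBRARY_PATH value printed above carries the same spdk/build/lib, spdk/dpdk/build/lib, and libvfio-user triplet several times over, apparently because autotest_common.sh is re-sourced at each nesting level of the test run and prepends the directories unconditionally; the duplicates are harmless to the loader (search order is unchanged), just noisy. A minimal sketch of a dedup guard, using a hypothetical path_prepend helper that is not part of SPDK:

    path_prepend() {
      # Prepend $2 to the colon-separated variable named by $1,
      # but only when it is not already present.
      local var=$1 dir=$2
      case ":${!var}:" in
        *":${dir}:"*) ;;  # already listed; keep the value as-is
        *) printf -v "$var" '%s' "${dir}${!var:+:${!var}}" && export "$var" ;;
      esac
    }
    path_prepend LD_LIBRARY_PATH /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib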
00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
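Condensed, the sanitizer wiring replayed in the trace above comes down to three environment variables plus a leak-suppression file (option strings copied from this run; only UBSAN is actually active here, since the flags earlier resolve to SPDK_RUN_ASAN=0 and SPDK_RUN_UBSAN=1, but the variables are exported unconditionally):

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" > "$asan_suppression_file"
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export LSAN_OPTIONS=suppressions=$asan_suppression_file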
00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 322501 ]] 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 322501 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:10:41.409 
18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.GcfjLC 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.GcfjLC/tests/target /tmp/spdk.GcfjLC 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:41.409 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=606707712 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:41.410 18:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4677722112 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=189771837440 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=195963985920 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6192148480 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97971961856 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981992960 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39169753088 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=39192797184 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23044096 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97981521920 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981992960 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=471040 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.410 18:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19596386304 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19596398592 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:41.410 * Looking for test storage... 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=189771837440 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8406740992 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:10:41.410 18:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.410 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:41.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.411 --rc genhtml_branch_coverage=1 00:10:41.411 --rc genhtml_function_coverage=1 00:10:41.411 --rc genhtml_legend=1 00:10:41.411 --rc geninfo_all_blocks=1 00:10:41.411 --rc geninfo_unexecuted_blocks=1 00:10:41.411 00:10:41.411 ' 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:41.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.411 --rc genhtml_branch_coverage=1 00:10:41.411 --rc genhtml_function_coverage=1 00:10:41.411 --rc genhtml_legend=1 00:10:41.411 --rc geninfo_all_blocks=1 00:10:41.411 --rc geninfo_unexecuted_blocks=1 00:10:41.411 00:10:41.411 ' 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:41.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.411 --rc genhtml_branch_coverage=1 00:10:41.411 --rc genhtml_function_coverage=1 00:10:41.411 --rc genhtml_legend=1 00:10:41.411 --rc geninfo_all_blocks=1 00:10:41.411 --rc geninfo_unexecuted_blocks=1 00:10:41.411 00:10:41.411 ' 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:41.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.411 --rc genhtml_branch_coverage=1 00:10:41.411 --rc genhtml_function_coverage=1 00:10:41.411 --rc genhtml_legend=1 00:10:41.411 --rc geninfo_all_blocks=1 00:10:41.411 --rc geninfo_unexecuted_blocks=1 00:10:41.411 00:10:41.411 ' 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:41.411 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:41.451 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:41.451 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.451 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:41.451 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:41.451 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:41.451 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.451 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.451 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.451 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:41.451 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:41.451 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:41.451 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:48.023 
18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:48.023 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:48.023 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:48.023 Found net devices under 0000:86:00.0: cvl_0_0 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:48.023 Found net devices under 
0000:86:00.1: cvl_0_1 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:48.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:10:48.023 00:10:48.023 --- 10.0.0.2 ping statistics --- 00:10:48.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.023 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:48.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:10:48.023 00:10:48.023 --- 10.0.0.1 ping statistics --- 00:10:48.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.023 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.023 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.024 ************************************ 00:10:48.024 START TEST nvmf_filesystem_no_in_capsule 00:10:48.024 ************************************ 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
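The nvmf_tcp_init sequence above builds the whole two-port test rig on a single host: the first E810 port (cvl_0_0) is moved into a fresh network namespace to play the target at 10.0.0.2, the second (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and both directions are ping-verified. A condensed replay, with the interface names and addresses as they appear in this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator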
00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=325589 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 325589 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 325589 ']' 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.024 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.024 [2024-10-08 18:18:40.826475] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:10:48.024 [2024-10-08 18:18:40.826514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.024 [2024-10-08 18:18:40.880551] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.024 [2024-10-08 18:18:40.956253] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.024 [2024-10-08 18:18:40.956295] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.024 [2024-10-08 18:18:40.956302] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.024 [2024-10-08 18:18:40.956308] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.024 [2024-10-08 18:18:40.956313] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
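nvmfappstart launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers (PID 325589 in this run). A sketch of the same pattern, with the binary path and flags copied from the trace; the polling loop is a simplified stand-in for the harness's waitforlisten, and it works across the namespace boundary because /var/tmp/spdk.sock is a Unix socket, which is filesystem-scoped rather than network-namespaced:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
      sleep 0.5
    done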
00:10:48.024 [2024-10-08 18:18:40.960394] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.024 [2024-10-08 18:18:40.960431] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.024 [2024-10-08 18:18:40.960538] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.024 [2024-10-08 18:18:40.960539] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.024 [2024-10-08 18:18:41.121310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.024 Malloc1 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.024 18:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.024 [2024-10-08 18:18:41.274814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:48.024 { 00:10:48.024 "name": "Malloc1", 00:10:48.024 "aliases": [ 00:10:48.024 "82bddeb8-da85-425f-8371-86c78b5adb29" 00:10:48.024 ], 00:10:48.024 "product_name": "Malloc disk", 00:10:48.024 "block_size": 512, 00:10:48.024 "num_blocks": 1048576, 00:10:48.024 "uuid": "82bddeb8-da85-425f-8371-86c78b5adb29", 00:10:48.024 "assigned_rate_limits": { 00:10:48.024 "rw_ios_per_sec": 0, 00:10:48.024 "rw_mbytes_per_sec": 0, 00:10:48.024 "r_mbytes_per_sec": 0, 00:10:48.024 "w_mbytes_per_sec": 0 00:10:48.024 }, 00:10:48.024 "claimed": true, 00:10:48.024 "claim_type": "exclusive_write", 00:10:48.024 "zoned": false, 00:10:48.024 "supported_io_types": { 00:10:48.024 "read": 
true, 00:10:48.024 "write": true, 00:10:48.024 "unmap": true, 00:10:48.024 "flush": true, 00:10:48.024 "reset": true, 00:10:48.024 "nvme_admin": false, 00:10:48.024 "nvme_io": false, 00:10:48.024 "nvme_io_md": false, 00:10:48.024 "write_zeroes": true, 00:10:48.024 "zcopy": true, 00:10:48.024 "get_zone_info": false, 00:10:48.024 "zone_management": false, 00:10:48.024 "zone_append": false, 00:10:48.024 "compare": false, 00:10:48.024 "compare_and_write": false, 00:10:48.024 "abort": true, 00:10:48.024 "seek_hole": false, 00:10:48.024 "seek_data": false, 00:10:48.024 "copy": true, 00:10:48.024 "nvme_iov_md": false 00:10:48.024 }, 00:10:48.024 "memory_domains": [ 00:10:48.024 { 00:10:48.024 "dma_device_id": "system", 00:10:48.024 "dma_device_type": 1 00:10:48.024 }, 00:10:48.024 { 00:10:48.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.024 "dma_device_type": 2 00:10:48.024 } 00:10:48.024 ], 00:10:48.024 "driver_specific": {} 00:10:48.024 } 00:10:48.024 ]' 00:10:48.024 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:48.284 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:48.284 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:48.284 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:48.284 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:48.284 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:48.284 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:48.284 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.667 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:49.667 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:49.667 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.667 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:49.667 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:51.570 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:51.570 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:51.570 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:51.570 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:51.571 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.571 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:51.571 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:51.571 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:51.571 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:51.571 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:51.571 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:51.571 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:51.571 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:51.571 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:51.571 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:51.571 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:51.571 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:51.571 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:52.138 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:53.073 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:53.073 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:53.073 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:53.073 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.073 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.073 ************************************ 00:10:53.073 START TEST filesystem_ext4 00:10:53.073 ************************************ 00:10:53.073 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
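The per-filesystem helper whose xtrace begins here formats the partition prepared above and exercises it through a short mount/write/delete cycle. A minimal sketch of that sequence, assuming the device name nvme0n1 resolved from the serial above; the force flag mirrors the make_filesystem logic traced below (-F for ext4, -f for btrfs and xfs):

nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100% && partprobe
mkfs.ext4 -F /dev/${nvme_name}p1      # mkfs.btrfs -f / mkfs.xfs -f in the later runs
mount /dev/${nvme_name}p1 /mnt/device
touch /mnt/device/aaa && sync         # write a file and flush it
rm /mnt/device/aaa && sync            # remove it and flush again
umount /mnt/device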
00:10:53.073 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:53.073 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:53.073 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:53.073 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:53.074 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:53.074 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:53.074 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:53.074 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:53.074 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:53.074 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:53.074 mke2fs 1.47.0 (5-Feb-2023) 00:10:53.074 Discarding device blocks: 0/522240 done 00:10:53.332 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:53.332 Filesystem UUID: 36b1ced2-daa5-414d-9b7d-a0025e51011e 00:10:53.332 Superblock backups stored on blocks: 00:10:53.332 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:53.332 00:10:53.332 Allocating group tables: 0/64 done 00:10:53.332 Writing inode tables: 0/64 done 00:10:53.900 Creating journal (8192 blocks): done 00:10:53.900 Writing superblocks and filesystem accounting information: 0/64 done 00:10:53.900 00:10:53.900 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:53.900 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:59.171 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:59.171 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:59.171 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:59.171 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:59.171 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:59.171 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:59.171 
18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 325589 00:10:59.171 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:59.171 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:59.171 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:59.171 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:59.171 00:10:59.171 real 0m6.175s 00:10:59.171 user 0m0.027s 00:10:59.171 sys 0m0.072s 00:10:59.171 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.171 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:59.171 ************************************ 00:10:59.171 END TEST filesystem_ext4 00:10:59.171 ************************************ 00:10:59.430 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:59.430 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:59.430 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.430 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.430 ************************************ 00:10:59.430 START TEST filesystem_btrfs 00:10:59.430 ************************************ 00:10:59.430 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:59.430 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:59.430 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:59.430 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:59.430 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:59.430 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:59.430 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:59.430 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:59.430 18:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:59.430 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:59.430 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:59.430 btrfs-progs v6.8.1 00:10:59.430 See https://btrfs.readthedocs.io for more information. 00:10:59.430 00:10:59.430 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:59.430 NOTE: several default settings have changed in version 5.15, please make sure 00:10:59.430 this does not affect your deployments: 00:10:59.430 - DUP for metadata (-m dup) 00:10:59.430 - enabled no-holes (-O no-holes) 00:10:59.430 - enabled free-space-tree (-R free-space-tree) 00:10:59.430 00:10:59.430 Label: (null) 00:10:59.430 UUID: a0742b45-6d63-469b-9747-c309911d752b 00:10:59.430 Node size: 16384 00:10:59.430 Sector size: 4096 (CPU page size: 4096) 00:10:59.430 Filesystem size: 510.00MiB 00:10:59.430 Block group profiles: 00:10:59.430 Data: single 8.00MiB 00:10:59.431 Metadata: DUP 32.00MiB 00:10:59.431 System: DUP 8.00MiB 00:10:59.431 SSD detected: yes 00:10:59.431 Zoned device: no 00:10:59.431 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:59.431 Checksum: crc32c 00:10:59.431 Number of devices: 1 00:10:59.431 Devices: 00:10:59.431 ID SIZE PATH 00:10:59.431 1 510.00MiB /dev/nvme0n1p1 00:10:59.431 00:10:59.431 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:59.431 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:59.689 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:59.689 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:59.948 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:59.948 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:59.948 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:59.948 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:59.948 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 325589 00:10:59.948 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:59.948 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:59.948 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:59.948 
18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:59.948 00:10:59.948 real 0m0.541s 00:10:59.948 user 0m0.027s 00:10:59.948 sys 0m0.115s 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:59.949 ************************************ 00:10:59.949 END TEST filesystem_btrfs 00:10:59.949 ************************************ 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.949 ************************************ 00:10:59.949 START TEST filesystem_xfs 00:10:59.949 ************************************ 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:59.949 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:59.949 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:59.949 = sectsz=512 attr=2, projid32bit=1 00:10:59.949 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:59.949 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:59.949 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:59.949 = sunit=0 swidth=0 blks 00:10:59.949 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:59.949 log =internal log bsize=4096 blocks=16384, version=2 00:10:59.949 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:59.949 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:01.324 Discarding blocks...Done. 00:11:01.324 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:01.324 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:03.858 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:03.858 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:03.858 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:03.858 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:03.858 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:03.858 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:03.858 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 325589 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:03.859 00:11:03.859 real 0m3.647s 00:11:03.859 user 0m0.021s 00:11:03.859 sys 0m0.079s 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:03.859 ************************************ 00:11:03.859 END TEST filesystem_xfs 00:11:03.859 ************************************ 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:03.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.859 18:18:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 325589 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 325589 ']' 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 325589 00:11:03.859 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:03.859 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:03.859 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 325589 00:11:03.859 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:03.859 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:03.859 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 325589' 00:11:03.859 killing process with pid 325589 00:11:03.859 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 325589 00:11:03.859 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 325589 00:11:04.134 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:04.134 00:11:04.134 real 0m16.630s 00:11:04.134 user 1m5.386s 00:11:04.134 sys 0m1.326s 00:11:04.134 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.134 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.134 ************************************ 00:11:04.134 END TEST nvmf_filesystem_no_in_capsule 00:11:04.134 ************************************ 00:11:04.134 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:04.134 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:04.134 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.134 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:04.433 ************************************ 00:11:04.433 START TEST nvmf_filesystem_in_capsule 00:11:04.433 ************************************ 00:11:04.433 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:04.433 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:04.433 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:04.433 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:04.433 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:04.433 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.433 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=328573 00:11:04.433 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 328573 00:11:04.433 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.433 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 328573 ']' 00:11:04.433 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.433 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.433 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
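The half that starts here repeats the same provisioning with in-capsule data enabled; the only functional difference is the -c value handed to nvmf_create_transport (0 above, 4096 here). A hedged summary of the RPCs both halves issue through the harness's rpc_cmd wrapper (scripts/rpc.py underneath), reconstructed from the trace:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 0 in the first half
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
    --hostid=00ad29c2-ccbd-e911-906e-0017a4403562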
00:11:04.433 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.433 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.433 [2024-10-08 18:18:57.532628] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:11:04.433 [2024-10-08 18:18:57.532676] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.433 [2024-10-08 18:18:57.605717] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.433 [2024-10-08 18:18:57.684295] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.433 [2024-10-08 18:18:57.684330] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.433 [2024-10-08 18:18:57.684337] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.433 [2024-10-08 18:18:57.684342] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.433 [2024-10-08 18:18:57.684348] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.433 [2024-10-08 18:18:57.685865] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.433 [2024-10-08 18:18:57.685975] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.433 [2024-10-08 18:18:57.686081] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.433 [2024-10-08 18:18:57.686082] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.077 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:05.077 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:05.077 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:05.077 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:05.077 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.077 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.077 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:05.077 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:05.335 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.335 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.335 [2024-10-08 18:18:58.402357] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.335 18:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.335 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:05.335 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.335 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.335 Malloc1 00:11:05.335 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.335 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:05.335 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.335 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.335 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.335 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:05.335 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.335 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.335 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.336 [2024-10-08 18:18:58.546207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:05.336 18:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:05.336 { 00:11:05.336 "name": "Malloc1", 00:11:05.336 "aliases": [ 00:11:05.336 "1f107af4-6eee-42c8-b631-39dc857d9e4a" 00:11:05.336 ], 00:11:05.336 "product_name": "Malloc disk", 00:11:05.336 "block_size": 512, 00:11:05.336 "num_blocks": 1048576, 00:11:05.336 "uuid": "1f107af4-6eee-42c8-b631-39dc857d9e4a", 00:11:05.336 "assigned_rate_limits": { 00:11:05.336 "rw_ios_per_sec": 0, 00:11:05.336 "rw_mbytes_per_sec": 0, 00:11:05.336 "r_mbytes_per_sec": 0, 00:11:05.336 "w_mbytes_per_sec": 0 00:11:05.336 }, 00:11:05.336 "claimed": true, 00:11:05.336 "claim_type": "exclusive_write", 00:11:05.336 "zoned": false, 00:11:05.336 "supported_io_types": { 00:11:05.336 "read": true, 00:11:05.336 "write": true, 00:11:05.336 "unmap": true, 00:11:05.336 "flush": true, 00:11:05.336 "reset": true, 00:11:05.336 "nvme_admin": false, 00:11:05.336 "nvme_io": false, 00:11:05.336 "nvme_io_md": false, 00:11:05.336 "write_zeroes": true, 00:11:05.336 "zcopy": true, 00:11:05.336 "get_zone_info": false, 00:11:05.336 "zone_management": false, 00:11:05.336 "zone_append": false, 00:11:05.336 "compare": false, 00:11:05.336 "compare_and_write": false, 00:11:05.336 "abort": true, 00:11:05.336 "seek_hole": false, 00:11:05.336 "seek_data": false, 00:11:05.336 "copy": true, 00:11:05.336 "nvme_iov_md": false 00:11:05.336 }, 00:11:05.336 "memory_domains": [ 00:11:05.336 { 00:11:05.336 "dma_device_id": "system", 00:11:05.336 "dma_device_type": 1 00:11:05.336 }, 00:11:05.336 { 00:11:05.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.336 "dma_device_type": 2 00:11:05.336 } 00:11:05.336 ], 00:11:05.336 "driver_specific": {} 00:11:05.336 } 00:11:05.336 ]' 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:05.336 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:05.595 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:05.595 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:05.595 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:05.595 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:05.595 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.532 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:06.532 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:06.532 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.532 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:06.532 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:09.068 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:09.068 18:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:09.326 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.703 ************************************ 00:11:10.703 START TEST filesystem_in_capsule_ext4 00:11:10.703 ************************************ 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:10.703 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:10.703 mke2fs 1.47.0 (5-Feb-2023) 00:11:10.703 Discarding device blocks: 0/522240 done 00:11:10.703 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:10.703 Filesystem UUID: 52fc2dda-1826-4c29-9f61-d76ff8594212 00:11:10.703 Superblock backups stored on blocks: 00:11:10.703 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:10.703 00:11:10.703 Allocating group tables: 0/64 done 00:11:10.703 Writing inode tables: 
0/64 done 00:11:13.280 Creating journal (8192 blocks): done 00:11:15.483 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:11:15.483 00:11:15.483 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:15.483 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:20.754 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:20.754 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:20.754 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:20.754 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:20.754 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:20.754 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:20.754 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 328573 00:11:20.754 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:20.754 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:20.754 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:20.754 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:20.754 00:11:20.754 real 0m10.417s 00:11:20.754 user 0m0.036s 00:11:20.754 sys 0m0.069s 00:11:20.754 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:20.754 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:20.754 ************************************ 00:11:20.754 END TEST filesystem_in_capsule_ext4 00:11:20.754 ************************************ 00:11:21.013 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:21.013 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:21.013 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.013 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.013 
************************************ 00:11:21.013 START TEST filesystem_in_capsule_btrfs 00:11:21.013 ************************************ 00:11:21.013 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:21.013 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:21.013 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.013 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:21.013 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:21.013 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:21.013 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:21.013 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:21.013 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:21.013 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:21.013 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:21.272 btrfs-progs v6.8.1 00:11:21.272 See https://btrfs.readthedocs.io for more information. 00:11:21.272 00:11:21.272 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:21.272 NOTE: several default settings have changed in version 5.15, please make sure 00:11:21.272 this does not affect your deployments: 00:11:21.272 - DUP for metadata (-m dup) 00:11:21.272 - enabled no-holes (-O no-holes) 00:11:21.272 - enabled free-space-tree (-R free-space-tree) 00:11:21.272 00:11:21.272 Label: (null) 00:11:21.272 UUID: 2f9eb84f-cbba-4cfb-ab54-e788dd7e4838 00:11:21.272 Node size: 16384 00:11:21.272 Sector size: 4096 (CPU page size: 4096) 00:11:21.272 Filesystem size: 510.00MiB 00:11:21.272 Block group profiles: 00:11:21.272 Data: single 8.00MiB 00:11:21.272 Metadata: DUP 32.00MiB 00:11:21.272 System: DUP 8.00MiB 00:11:21.272 SSD detected: yes 00:11:21.272 Zoned device: no 00:11:21.272 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:21.272 Checksum: crc32c 00:11:21.272 Number of devices: 1 00:11:21.272 Devices: 00:11:21.272 ID SIZE PATH 00:11:21.272 1 510.00MiB /dev/nvme0n1p1 00:11:21.272 00:11:21.272 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:21.272 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 328573 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:22.208 00:11:22.208 real 0m1.328s 00:11:22.208 user 0m0.024s 00:11:22.208 sys 0m0.119s 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:22.208 ************************************ 00:11:22.208 END TEST filesystem_in_capsule_btrfs 00:11:22.208 ************************************ 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.208 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.467 ************************************ 00:11:22.467 START TEST filesystem_in_capsule_xfs 00:11:22.467 ************************************ 00:11:22.467 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:22.467 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:22.468 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:22.468 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:22.468 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:22.468 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:22.468 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:22.468 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:22.468 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:22.468 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:22.468 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:22.468 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:22.468 = sectsz=512 attr=2, projid32bit=1 00:11:22.468 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:22.468 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:22.468 data = bsize=4096 blocks=130560, imaxpct=25 00:11:22.468 = sunit=0 swidth=0 blks 00:11:22.468 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:22.468 log =internal log bsize=4096 blocks=16384, version=2 00:11:22.468 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:22.468 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:23.845 Discarding blocks...Done. 
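After mkfs, every filesystem pass repeats the same smoke test, traced above at target/filesystem.sh lines 23-43: mount, create and remove a file with syncs in between, unmount, verify the target process survived the I/O, and verify the kernel still exposes the namespace and its partition. Condensed, with the target pid (328573 in this run) parameterized:

# Condensed from the target/filesystem.sh xtrace; not the verbatim script.
fs_smoke_test() {
    local nvmfpid=$1                          # SPDK target pid, 328573 here
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync             # push a write through the fs
    rm /mnt/device/aaa && sync                # and a delete
    umount /mnt/device
    kill -0 "$nvmfpid"                        # target must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still attached
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible
}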
00:11:23.845 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:23.845 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:25.222 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:25.222 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:25.222 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:25.222 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:25.222 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:25.222 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:25.222 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 328573 00:11:25.222 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:25.222 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:25.222 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:25.222 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:25.222 00:11:25.222 real 0m2.992s 00:11:25.222 user 0m0.035s 00:11:25.222 sys 0m0.066s 00:11:25.222 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.222 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:25.222 ************************************ 00:11:25.222 END TEST filesystem_in_capsule_xfs 00:11:25.222 ************************************ 00:11:25.480 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 328573 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 328573 ']' 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 328573 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:25.739 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 328573 00:11:25.739 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:25.739 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:25.739 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 328573' 00:11:25.739 killing process with pid 328573 00:11:25.740 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 328573 00:11:25.740 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 328573 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:26.308 00:11:26.308 real 0m21.915s 00:11:26.308 user 1m26.285s 00:11:26.308 sys 0m1.594s 00:11:26.308 18:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.308 ************************************ 00:11:26.308 END TEST nvmf_filesystem_in_capsule 00:11:26.308 ************************************ 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.308 rmmod nvme_tcp 00:11:26.308 rmmod nvme_fabrics 00:11:26.308 rmmod nvme_keyring 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.308 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:28.843 00:11:28.843 real 0m47.303s 00:11:28.843 user 2m33.777s 00:11:28.843 sys 0m7.586s 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.843 
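With all three filesystems verified, the suite tears everything down, as the trace above shows: the partition is removed under flock, the initiator disconnects, the subsystem is deleted over RPC (rpc_cmd is the test wrapper around scripts/rpc.py), the target is killed, and nvmftestfini unloads the fabrics modules, strips only the SPDK_NVMF-tagged iptables rules, and flushes the initiator interface. In outline:

# Outline of the teardown traced above; rpc_cmd stands in for
# scripts/rpc.py as in the test harness.
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop partition 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # initiator side
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 328573 && wait 328573                            # stop nvmf_tgt
modprobe -v -r nvme-tcp                               # rmmod nvme_tcp/_fabrics/_keyring
iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep non-SPDK rules
ip -4 addr flush cvl_0_1                              # clear initiator NIC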
************************************ 00:11:28.843 END TEST nvmf_filesystem 00:11:28.843 ************************************ 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:28.843 ************************************ 00:11:28.843 START TEST nvmf_target_discovery 00:11:28.843 ************************************ 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:28.843 * Looking for test storage... 00:11:28.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.843 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:28.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.844 --rc genhtml_branch_coverage=1 00:11:28.844 --rc genhtml_function_coverage=1 00:11:28.844 --rc genhtml_legend=1 00:11:28.844 --rc geninfo_all_blocks=1 00:11:28.844 --rc geninfo_unexecuted_blocks=1 00:11:28.844 00:11:28.844 ' 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:28.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.844 --rc genhtml_branch_coverage=1 00:11:28.844 --rc genhtml_function_coverage=1 00:11:28.844 --rc genhtml_legend=1 00:11:28.844 --rc geninfo_all_blocks=1 00:11:28.844 --rc geninfo_unexecuted_blocks=1 00:11:28.844 00:11:28.844 ' 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:28.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.844 --rc genhtml_branch_coverage=1 00:11:28.844 --rc genhtml_function_coverage=1 00:11:28.844 --rc genhtml_legend=1 00:11:28.844 --rc geninfo_all_blocks=1 00:11:28.844 --rc geninfo_unexecuted_blocks=1 00:11:28.844 00:11:28.844 ' 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:28.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.844 --rc genhtml_branch_coverage=1 00:11:28.844 --rc genhtml_function_coverage=1 00:11:28.844 --rc genhtml_legend=1 00:11:28.844 --rc geninfo_all_blocks=1 00:11:28.844 --rc geninfo_unexecuted_blocks=1 00:11:28.844 00:11:28.844 ' 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:28.844 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:35.412 18:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:35.412 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:35.412 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:35.412 Found net devices under 0000:86:00.0: cvl_0_0 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
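Setting up for the discovery test, nvmftestinit walks the supported-NIC tables and maps each matched PCI function to its kernel interface through sysfs, which is how the two e810 ports at 0000:86:00.0/1 become cvl_0_0 (just above) and cvl_0_1 (just below). Roughly (the trace shows the up-check already expanded to [[ up == up ]]; reading operstate is an assumption about how that value is obtained):

# Rough shape of the mapping traced in nvmf/common.sh; not verbatim.
for pci in "${pci_devs[@]}"; do                     # e.g. 0000:86:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    state=up   # assumed: derived from the interface's sysfs operstate
    [[ $state == up ]] || continue
    pci_net_devs=("${pci_net_devs[@]##*/}")         # strip path -> cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done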
00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:35.412 Found net devices under 0000:86:00.1: cvl_0_1 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:35.412 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.413 18:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:35.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:11:35.413 00:11:35.413 --- 10.0.0.2 ping statistics --- 00:11:35.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.413 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:11:35.413 00:11:35.413 --- 10.0.0.1 ping statistics --- 00:11:35.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.413 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=336001 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 336001 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 336001 ']' 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.413 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.413 [2024-10-08 18:19:27.902213] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:11:35.413 [2024-10-08 18:19:27.902255] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.413 [2024-10-08 18:19:27.971114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.413 [2024-10-08 18:19:28.046572] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.413 [2024-10-08 18:19:28.046611] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.413 [2024-10-08 18:19:28.046618] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.413 [2024-10-08 18:19:28.046626] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.413 [2024-10-08 18:19:28.046631] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
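The namespace networking is verified by the two pings above, and nvmfappstart then launches the target inside that namespace and waits for its RPC socket while DPDK/EAL initializes; the four reactors requested by -m 0xF are reported just below. In outline (waitforlisten's polling is an assumed simplification; the trace only shows its max_retries=100 default):

# Outline of nvmfappstart as traced; the readiness probe is a sketch of
# what waitforlisten does, not its verbatim body.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &    # -m 0xF -> cores 0-3
nvmfpid=$!                                           # 336001 in this run
for ((i = 0; i < 100; i++)); do                      # max_retries=100
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done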
00:11:35.413 [2024-10-08 18:19:28.048098] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.413 [2024-10-08 18:19:28.048212] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.413 [2024-10-08 18:19:28.048317] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.413 [2024-10-08 18:19:28.048317] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.413 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.413 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:35.413 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:35.413 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.413 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 [2024-10-08 18:19:28.774703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 Null1 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 18:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 [2024-10-08 18:19:28.820167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 Null2 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:35.672 Null3 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.672 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.673 Null4 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.673 18:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.673 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:35.937 00:11:35.937 Discovery Log Number of Records 6, Generation counter 6 00:11:35.937 =====Discovery Log Entry 0====== 00:11:35.937 trtype: tcp 00:11:35.937 adrfam: ipv4 00:11:35.937 subtype: current discovery subsystem 00:11:35.937 treq: not required 00:11:35.937 portid: 0 00:11:35.937 trsvcid: 4420 00:11:35.937 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:35.937 traddr: 10.0.0.2 00:11:35.937 eflags: explicit discovery connections, duplicate discovery information 00:11:35.937 sectype: none 00:11:35.937 =====Discovery Log Entry 1====== 00:11:35.937 trtype: tcp 00:11:35.937 adrfam: ipv4 00:11:35.937 subtype: nvme subsystem 00:11:35.937 treq: not required 00:11:35.937 portid: 0 00:11:35.937 trsvcid: 4420 00:11:35.937 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:35.937 traddr: 10.0.0.2 00:11:35.937 eflags: none 00:11:35.937 sectype: none 00:11:35.937 =====Discovery Log Entry 2====== 00:11:35.937 trtype: tcp 00:11:35.937 adrfam: ipv4 00:11:35.937 subtype: nvme subsystem 00:11:35.937 treq: not required 00:11:35.937 portid: 0 00:11:35.937 trsvcid: 4420 00:11:35.937 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:35.937 traddr: 10.0.0.2 00:11:35.937 eflags: none 00:11:35.937 sectype: none 00:11:35.937 =====Discovery Log Entry 3====== 00:11:35.937 trtype: tcp 00:11:35.937 adrfam: ipv4 00:11:35.937 subtype: nvme subsystem 00:11:35.937 treq: not required 00:11:35.937 portid: 0 00:11:35.937 trsvcid: 4420 00:11:35.937 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:35.937 traddr: 10.0.0.2 00:11:35.937 eflags: none 00:11:35.937 sectype: none 00:11:35.937 =====Discovery Log Entry 4====== 00:11:35.937 trtype: tcp 00:11:35.937 adrfam: ipv4 00:11:35.937 subtype: nvme subsystem 
00:11:35.937 treq: not required 00:11:35.937 portid: 0 00:11:35.937 trsvcid: 4420 00:11:35.937 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:35.937 traddr: 10.0.0.2 00:11:35.937 eflags: none 00:11:35.937 sectype: none 00:11:35.937 =====Discovery Log Entry 5====== 00:11:35.937 trtype: tcp 00:11:35.937 adrfam: ipv4 00:11:35.937 subtype: discovery subsystem referral 00:11:35.937 treq: not required 00:11:35.937 portid: 0 00:11:35.937 trsvcid: 4430 00:11:35.937 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:35.937 traddr: 10.0.0.2 00:11:35.937 eflags: none 00:11:35.937 sectype: none 00:11:35.937 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:35.937 Perform nvmf subsystem discovery via RPC 00:11:35.937 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:35.937 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.937 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.937 [ 00:11:35.937 { 00:11:35.937 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:35.937 "subtype": "Discovery", 00:11:35.937 "listen_addresses": [ 00:11:35.937 { 00:11:35.937 "trtype": "TCP", 00:11:35.937 "adrfam": "IPv4", 00:11:35.937 "traddr": "10.0.0.2", 00:11:35.937 "trsvcid": "4420" 00:11:35.937 } 00:11:35.937 ], 00:11:35.937 "allow_any_host": true, 00:11:35.937 "hosts": [] 00:11:35.937 }, 00:11:35.937 { 00:11:35.937 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.937 "subtype": "NVMe", 00:11:35.937 "listen_addresses": [ 00:11:35.937 { 00:11:35.937 "trtype": "TCP", 00:11:35.937 "adrfam": "IPv4", 00:11:35.937 "traddr": "10.0.0.2", 00:11:35.937 "trsvcid": "4420" 00:11:35.937 } 00:11:35.937 ], 00:11:35.937 "allow_any_host": true, 00:11:35.937 "hosts": [], 00:11:35.937 "serial_number": "SPDK00000000000001", 00:11:35.937 "model_number": "SPDK bdev Controller", 00:11:35.937 "max_namespaces": 32, 00:11:35.937 "min_cntlid": 1, 00:11:35.937 "max_cntlid": 65519, 00:11:35.937 "namespaces": [ 00:11:35.937 { 00:11:35.937 "nsid": 1, 00:11:35.937 "bdev_name": "Null1", 00:11:35.937 "name": "Null1", 00:11:35.937 "nguid": "ABCC018FBBE645E89911A3707A943E28", 00:11:35.937 "uuid": "abcc018f-bbe6-45e8-9911-a3707a943e28" 00:11:35.937 } 00:11:35.937 ] 00:11:35.937 }, 00:11:35.937 { 00:11:35.937 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:35.937 "subtype": "NVMe", 00:11:35.937 "listen_addresses": [ 00:11:35.937 { 00:11:35.937 "trtype": "TCP", 00:11:35.937 "adrfam": "IPv4", 00:11:35.937 "traddr": "10.0.0.2", 00:11:35.937 "trsvcid": "4420" 00:11:35.937 } 00:11:35.937 ], 00:11:35.937 "allow_any_host": true, 00:11:35.937 "hosts": [], 00:11:35.937 "serial_number": "SPDK00000000000002", 00:11:35.937 "model_number": "SPDK bdev Controller", 00:11:35.937 "max_namespaces": 32, 00:11:35.937 "min_cntlid": 1, 00:11:35.937 "max_cntlid": 65519, 00:11:35.937 "namespaces": [ 00:11:35.937 { 00:11:35.937 "nsid": 1, 00:11:35.938 "bdev_name": "Null2", 00:11:35.938 "name": "Null2", 00:11:35.938 "nguid": "F2C0F7137F344F1483680BB096E20822", 00:11:35.938 "uuid": "f2c0f713-7f34-4f14-8368-0bb096e20822" 00:11:35.938 } 00:11:35.938 ] 00:11:35.938 }, 00:11:35.938 { 00:11:35.938 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:35.938 "subtype": "NVMe", 00:11:35.938 "listen_addresses": [ 00:11:35.938 { 00:11:35.938 "trtype": "TCP", 00:11:35.938 "adrfam": "IPv4", 00:11:35.938 "traddr": "10.0.0.2", 
00:11:35.938 "trsvcid": "4420" 00:11:35.938 } 00:11:35.938 ], 00:11:35.938 "allow_any_host": true, 00:11:35.938 "hosts": [], 00:11:35.938 "serial_number": "SPDK00000000000003", 00:11:35.938 "model_number": "SPDK bdev Controller", 00:11:35.938 "max_namespaces": 32, 00:11:35.938 "min_cntlid": 1, 00:11:35.938 "max_cntlid": 65519, 00:11:35.938 "namespaces": [ 00:11:35.938 { 00:11:35.938 "nsid": 1, 00:11:35.938 "bdev_name": "Null3", 00:11:35.938 "name": "Null3", 00:11:35.938 "nguid": "F501DA6CE9BC437D9A7BB04A9C9CDF81", 00:11:35.938 "uuid": "f501da6c-e9bc-437d-9a7b-b04a9c9cdf81" 00:11:35.938 } 00:11:35.938 ] 00:11:35.938 }, 00:11:35.938 { 00:11:35.938 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:35.938 "subtype": "NVMe", 00:11:35.938 "listen_addresses": [ 00:11:35.938 { 00:11:35.938 "trtype": "TCP", 00:11:35.938 "adrfam": "IPv4", 00:11:35.938 "traddr": "10.0.0.2", 00:11:35.938 "trsvcid": "4420" 00:11:35.938 } 00:11:35.938 ], 00:11:35.938 "allow_any_host": true, 00:11:35.938 "hosts": [], 00:11:35.938 "serial_number": "SPDK00000000000004", 00:11:35.938 "model_number": "SPDK bdev Controller", 00:11:35.938 "max_namespaces": 32, 00:11:35.938 "min_cntlid": 1, 00:11:35.938 "max_cntlid": 65519, 00:11:35.938 "namespaces": [ 00:11:35.938 { 00:11:35.938 "nsid": 1, 00:11:35.938 "bdev_name": "Null4", 00:11:35.938 "name": "Null4", 00:11:35.938 "nguid": "E8293261C8F54D6CA62D8742462F4834", 00:11:35.938 "uuid": "e8293261-c8f5-4d6c-a62d-8742462f4834" 00:11:35.938 } 00:11:35.938 ] 00:11:35.938 } 00:11:35.938 ] 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.938 18:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:35.938 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:36.199 18:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:36.199 rmmod nvme_tcp 00:11:36.199 rmmod nvme_fabrics 00:11:36.199 rmmod nvme_keyring 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 336001 ']' 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 336001 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 336001 ']' 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 336001 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:36.199 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 336001 00:11:36.200 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:36.200 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:36.200 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 336001' 00:11:36.200 killing process with pid 336001 00:11:36.200 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 336001 00:11:36.200 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 336001 00:11:36.459 18:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:36.459 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:36.459 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:36.459 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:36.459 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:11:36.459 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:11:36.459 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:36.459 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:36.459 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:36.459 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.459 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.459 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.365 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:38.365 00:11:38.365 real 0m10.040s 00:11:38.365 user 0m8.167s 00:11:38.365 sys 0m4.876s 00:11:38.365 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.365 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.365 ************************************ 00:11:38.365 END TEST nvmf_target_discovery 00:11:38.365 ************************************ 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:38.624 ************************************ 00:11:38.624 START TEST nvmf_referrals 00:11:38.624 ************************************ 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:38.624 * Looking for test storage... 
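Stripped of the xtrace noise, the nvmf_target_discovery test that just ended is a short RPC script: create the TCP transport, build four null bdevs, wrap each one in a subsystem with a namespace and a 10.0.0.2:4420 listener, add a discovery listener and a 4430 referral, then check that nvme discover returns six log records (four NVMe subsystems, the current discovery subsystem, and the referral) before tearing everything back down. A condensed sketch, with sizes, NQNs and addresses taken from the trace above (the rpc_py path is illustrative, and the hostnqn/hostid flags from the real discover call are omitted):

rpc_py=scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    $rpc_py bdev_null_create Null$i 102400 512          # size/block size exactly as invoked above
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
nvme discover -t tcp -a 10.0.0.2 -s 4420                # expects 6 discovery log records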
00:11:38.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:38.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.624 --rc genhtml_branch_coverage=1 00:11:38.624 --rc genhtml_function_coverage=1 00:11:38.624 --rc genhtml_legend=1 00:11:38.624 --rc geninfo_all_blocks=1 00:11:38.624 --rc geninfo_unexecuted_blocks=1 00:11:38.624 00:11:38.624 ' 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:38.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.624 --rc genhtml_branch_coverage=1 00:11:38.624 --rc genhtml_function_coverage=1 00:11:38.624 --rc genhtml_legend=1 00:11:38.624 --rc geninfo_all_blocks=1 00:11:38.624 --rc geninfo_unexecuted_blocks=1 00:11:38.624 00:11:38.624 ' 00:11:38.624 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:38.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.625 --rc genhtml_branch_coverage=1 00:11:38.625 --rc genhtml_function_coverage=1 00:11:38.625 --rc genhtml_legend=1 00:11:38.625 --rc geninfo_all_blocks=1 00:11:38.625 --rc geninfo_unexecuted_blocks=1 00:11:38.625 00:11:38.625 ' 00:11:38.625 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:38.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.625 --rc genhtml_branch_coverage=1 00:11:38.625 --rc genhtml_function_coverage=1 00:11:38.625 --rc genhtml_legend=1 00:11:38.625 --rc geninfo_all_blocks=1 00:11:38.625 --rc geninfo_unexecuted_blocks=1 00:11:38.625 00:11:38.625 ' 00:11:38.625 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.625 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:38.625 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.625 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.625 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.625 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.625 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.625 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.625 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.625 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.625 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:38.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
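The referrals.sh preamble being sourced here (it continues just below with NVMF_REFERRAL_IP_3 and the 4430 referral port) parameterizes the whole test: three loopback referral addresses, 127.0.0.2 through 127.0.0.4, registered against the discovery service and then read back over RPC. In sketch form, under the assumption that rpc.py talks to the default socket (the expected-count check mirrors the jq length test later in this trace):

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s 4430
done
scripts/rpc.py nvmf_discovery_get_referrals | jq length    # the test expects 3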
00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:38.885 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:45.455 18:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:45.455 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:45.455 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:45.455 
18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:45.455 Found net devices under 0000:86:00.0: cvl_0_0 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:45.455 Found net devices under 0000:86:00.1: cvl_0_1 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:45.455 18:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:45.455 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:45.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:45.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:11:45.456 00:11:45.456 --- 10.0.0.2 ping statistics --- 00:11:45.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.456 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:45.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:45.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:11:45.456 00:11:45.456 --- 10.0.0.1 ping statistics --- 00:11:45.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.456 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=339784 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 339784 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 339784 ']' 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
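The ping exchanges just above are the tail end of nvmf_tcp_init: the harness splits the two e810 ports into target and initiator roles by moving one into a network namespace, then verifies reachability in both directions before launching nvmf_tgt inside that namespace. The equivalent commands, lifted from the trace (the iptables ACCEPT rule for port 4420 shown above is part of the same setup):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # cvl_0_0 becomes the target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                    # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns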
00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:45.456 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.456 [2024-10-08 18:19:38.030387] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:11:45.456 [2024-10-08 18:19:38.030443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.456 [2024-10-08 18:19:38.103440] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.456 [2024-10-08 18:19:38.182197] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.456 [2024-10-08 18:19:38.182233] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.456 [2024-10-08 18:19:38.182240] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:45.456 [2024-10-08 18:19:38.182246] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:45.456 [2024-10-08 18:19:38.182252] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:45.456 [2024-10-08 18:19:38.183765] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.456 [2024-10-08 18:19:38.183875] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.456 [2024-10-08 18:19:38.183981] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.456 [2024-10-08 18:19:38.183983] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.715 [2024-10-08 18:19:38.925818] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:45.715 [2024-10-08 18:19:38.939124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.715 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:45.716 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:45.716 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.716 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.716 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.716 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:45.716 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:45.716 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:45.716 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:45.716 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:45.716 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.716 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:45.716 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.716 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.974 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:46.233 18:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:46.233 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:46.492 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:46.751 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:46.751 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:46.751 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:46.751 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:46.751 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:46.751 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:46.751 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:46.751 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:46.751 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:46.751 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:46.751 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:46.751 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:46.751 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.010 18:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:47.010 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:47.269 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:47.269 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:47.269 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:47.269 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:47.269 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:47.269 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.269 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:47.269 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:47.269 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:47.269 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:47.269 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:47.269 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.269 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:47.527 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:47.785 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
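
Stripped of the xtrace plumbing, the referral checks above pair each nvmf_discovery_* RPC with an nvme discover of the same discovery controller and compare the two views. A condensed sketch of that flow, assuming rpc.py from the SPDK tree is invoked directly (the harness's rpc_cmd is a wrapper around it) and HOSTNQN/HOSTID stand in for the generated host identity seen in this run:

  RPC="spdk/scripts/rpc.py"   # assumed location; adjust to your tree
  # target side: advertise referrals on the discovery subsystem
  $RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  $RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  $RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
  $RPC nvmf_discovery_get_referrals | jq length                        # expect 3
  $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # initiator side: the same referrals must appear in the discovery log page
  nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  # removal must be visible on both sides as well
  $RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

The later cases in the trace add -n discovery or -n nqn.2016-06.io.spdk:cnode1 to the same add call; as the subnqn checks above show, that is what makes the referral surface as a "discovery subsystem referral" or an "nvme subsystem" record in the initiator's discovery log page.
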
00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.786 rmmod nvme_tcp 00:11:47.786 rmmod nvme_fabrics 00:11:47.786 rmmod nvme_keyring 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 339784 ']' 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 339784 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 339784 ']' 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 339784 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:47.786 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 339784 00:11:48.073 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:48.073 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:48.073 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 339784' 00:11:48.073 killing process with pid 339784 00:11:48.073 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 339784 00:11:48.073 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 339784 00:11:48.073 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:48.073 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:48.073 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:48.073 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:48.073 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:11:48.073 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:48.074 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:11:48.074 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:48.074 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:48.074 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.074 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.074 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.610 00:11:50.610 real 0m11.651s 00:11:50.610 user 0m15.227s 00:11:50.610 sys 0m5.291s 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.610 ************************************ 00:11:50.610 END TEST nvmf_referrals 00:11:50.610 ************************************ 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:50.610 ************************************ 00:11:50.610 START TEST nvmf_connect_disconnect 00:11:50.610 ************************************ 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:50.610 * Looking for test storage... 00:11:50.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.610 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:50.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.610 --rc genhtml_branch_coverage=1 00:11:50.610 --rc genhtml_function_coverage=1 00:11:50.610 --rc genhtml_legend=1 00:11:50.610 --rc geninfo_all_blocks=1 00:11:50.611 --rc geninfo_unexecuted_blocks=1 00:11:50.611 00:11:50.611 ' 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:50.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.611 --rc genhtml_branch_coverage=1 00:11:50.611 --rc genhtml_function_coverage=1 00:11:50.611 --rc genhtml_legend=1 00:11:50.611 --rc geninfo_all_blocks=1 00:11:50.611 --rc geninfo_unexecuted_blocks=1 00:11:50.611 00:11:50.611 ' 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:50.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.611 --rc genhtml_branch_coverage=1 00:11:50.611 --rc genhtml_function_coverage=1 00:11:50.611 --rc genhtml_legend=1 00:11:50.611 --rc geninfo_all_blocks=1 00:11:50.611 --rc geninfo_unexecuted_blocks=1 00:11:50.611 00:11:50.611 ' 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:50.611 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.611 --rc genhtml_branch_coverage=1 00:11:50.611 --rc genhtml_function_coverage=1 00:11:50.611 --rc genhtml_legend=1 00:11:50.611 --rc geninfo_all_blocks=1 00:11:50.611 --rc geninfo_unexecuted_blocks=1 00:11:50.611 00:11:50.611 ' 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.611 18:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:50.611 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:57.181 
18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:57.181 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.181 
18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:57.181 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:57.181 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:57.182 Found net devices under 0000:86:00.0: cvl_0_0 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:57.182 Found net devices under 0000:86:00.1: cvl_0_1 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:57.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:11:57.182 00:11:57.182 --- 10.0.0.2 ping statistics --- 00:11:57.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.182 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:57.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:11:57.182 00:11:57.182 --- 10.0.0.1 ping statistics --- 00:11:57.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.182 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=343877 00:11:57.182 18:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 343877 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 343877 ']' 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.182 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:57.182 [2024-10-08 18:19:49.808648] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:11:57.182 [2024-10-08 18:19:49.808697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.182 [2024-10-08 18:19:49.882420] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.182 [2024-10-08 18:19:49.959824] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.182 [2024-10-08 18:19:49.959860] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.182 [2024-10-08 18:19:49.959867] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.182 [2024-10-08 18:19:49.959873] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.182 [2024-10-08 18:19:49.959901] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
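The sequence traced above — the nvmf_tcp_init plumbing followed by nvmfappstart (the DPDK notices above; the reactor-start lines follow below) — reduces to a short, reusable pattern: move the target-side e810 port into a private network namespace, address both ends, open TCP/4420 on the initiator interface, verify reachability both ways, then launch nvmf_tgt inside the namespace and wait for its RPC socket. A condensed sketch reconstructed from the commands traced above; the readiness loop is illustrative (the harness's waitforlisten helper body is not shown in this log), and rpc.py can run from the root namespace because the UNIX-domain socket is a filesystem object, not a network one:

    # Target port lives in the namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns
    # Start the target in the namespace, then poll until it serves RPCs.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1                    # bail if the target died
        sleep 0.1
    done

The pings succeed because this phy-mode rig evidently has the two ports of the card wired together, so traffic leaving cvl_0_1 in the root namespace arrives on cvl_0_0 inside the namespace.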
00:11:57.182 [2024-10-08 18:19:49.961564] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.182 [2024-10-08 18:19:49.961673] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.182 [2024-10-08 18:19:49.961706] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.182 [2024-10-08 18:19:49.961706] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:57.472 [2024-10-08 18:19:50.682146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:57.472 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:57.473 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.473 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:57.473 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.473 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:57.473 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.473 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:57.473 18:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.473 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.473 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.473 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:57.473 [2024-10-08 18:19:50.733685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.473 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.473 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:57.473 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:57.473 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:00.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:14.025 rmmod nvme_tcp 00:12:14.025 rmmod nvme_fabrics 00:12:14.025 rmmod nvme_keyring 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 343877 ']' 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 343877 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 343877 ']' 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 343877 00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
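Everything the nvmf_connect_disconnect test did between the transport init above and the teardown now in progress was driven over RPC, and the loop itself ran num_iterations=5 host-side connect/disconnect cycles whose body is hidden by the set +x at connect_disconnect.sh@34 — only the five "disconnected 1 controller(s)" lines surface. A sketch of the equivalent steps, with the rpc.py path abbreviated and the nvme connect flags assumed from standard nvme-cli usage (only the disconnect output is actually visible in this log):

    rpc=./scripts/rpc.py    # the harness routes these through its rpc_cmd wrapper
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                      # prints the new bdev's name; Malloc0 here
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    for ((i = 0; i < 5; i++)); do                       # num_iterations=5 above
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        sleep 1                                         # placeholder for the harness's readiness wait
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # emits the NQN:... lines above
    done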
00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 343877
00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 343877' killing process with pid 343877
00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 343877
00:12:14.025 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 343877
00:12:14.025 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:12:14.025 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:12:14.025 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:12:14.025 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr
00:12:14.025 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save
00:12:14.025 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:12:14.025 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore
00:12:14.025 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:14.025 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:14.025 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:14.025 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:14.025 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:15.929 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:15.929
00:12:15.929 real 0m25.766s
00:12:15.929 user 1m10.164s
00:12:15.929 sys 0m5.934s
00:12:15.929 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:15.929 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:15.929 ************************************
00:12:15.929 END TEST nvmf_connect_disconnect ************************************
00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra
-- common/autotest_common.sh@10 -- # set +x 00:12:16.189 ************************************ 00:12:16.189 START TEST nvmf_multitarget 00:12:16.189 ************************************ 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:16.189 * Looking for test storage... 00:12:16.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:16.189 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:16.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.190 --rc genhtml_branch_coverage=1 00:12:16.190 --rc genhtml_function_coverage=1 00:12:16.190 --rc genhtml_legend=1 00:12:16.190 --rc geninfo_all_blocks=1 00:12:16.190 --rc geninfo_unexecuted_blocks=1 00:12:16.190 00:12:16.190 ' 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:16.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.190 --rc genhtml_branch_coverage=1 00:12:16.190 --rc genhtml_function_coverage=1 00:12:16.190 --rc genhtml_legend=1 00:12:16.190 --rc geninfo_all_blocks=1 00:12:16.190 --rc geninfo_unexecuted_blocks=1 00:12:16.190 00:12:16.190 ' 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:16.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.190 --rc genhtml_branch_coverage=1 00:12:16.190 --rc genhtml_function_coverage=1 00:12:16.190 --rc genhtml_legend=1 00:12:16.190 --rc geninfo_all_blocks=1 00:12:16.190 --rc geninfo_unexecuted_blocks=1 00:12:16.190 00:12:16.190 ' 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:16.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.190 --rc genhtml_branch_coverage=1 00:12:16.190 --rc genhtml_function_coverage=1 00:12:16.190 --rc genhtml_legend=1 00:12:16.190 --rc geninfo_all_blocks=1 00:12:16.190 --rc geninfo_unexecuted_blocks=1 00:12:16.190 00:12:16.190 ' 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.190 18:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.190 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.450 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:16.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:16.451 18:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:16.451 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:23.024 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:23.024 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:23.024 Found net devices under 0000:86:00.0: cvl_0_0 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:23.024 Found net devices under 0000:86:00.1: cvl_0_1 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:23.024 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:23.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:12:23.025 00:12:23.025 --- 10.0.0.2 ping statistics --- 00:12:23.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.025 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:23.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:12:23.025 00:12:23.025 --- 10.0.0.1 ping statistics --- 00:12:23.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.025 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=350413 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 350413 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 350413 ']' 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:23.025 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:23.025 [2024-10-08 18:20:15.589430] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:12:23.025 [2024-10-08 18:20:15.589480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.025 [2024-10-08 18:20:15.660006] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.025 [2024-10-08 18:20:15.738817] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.025 [2024-10-08 18:20:15.738855] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.025 [2024-10-08 18:20:15.738862] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.025 [2024-10-08 18:20:15.738868] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.025 [2024-10-08 18:20:15.738873] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.025 [2024-10-08 18:20:15.740460] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.025 [2024-10-08 18:20:15.740489] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.025 [2024-10-08 18:20:15.740598] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.025 [2024-10-08 18:20:15.740599] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.284 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:23.284 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:23.285 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:23.285 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:23.285 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:23.285 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.285 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:23.285 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:23.285 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:23.285 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:23.285 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:23.542 "nvmf_tgt_1" 00:12:23.542 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:23.542 "nvmf_tgt_2" 00:12:23.542 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
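The multitarget assertions around this point all share one check/mutate/re-check shape: list the targets, assert the count with jq, create or delete a target, and list again. Condensed into plain commands (same multitarget_rpc.py helper and flags as traced above and below; the harness phrases each count check as an inequality test that aborts on mismatch):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default plus the two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default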
00:12:23.542 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:23.800 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:23.800 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:23.800 true 00:12:23.800 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:23.800 true 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:24.059 rmmod nvme_tcp 00:12:24.059 rmmod nvme_fabrics 00:12:24.059 rmmod nvme_keyring 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 350413 ']' 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 350413 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 350413 ']' 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 350413 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 350413 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:24.059 18:20:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 350413' 00:12:24.059 killing process with pid 350413 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 350413 00:12:24.059 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 350413 00:12:24.318 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:24.318 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:24.318 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:24.318 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:24.318 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:12:24.318 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:24.318 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:12:24.318 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:24.318 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:24.318 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.318 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.318 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.851 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:26.851 00:12:26.851 real 0m10.291s 00:12:26.851 user 0m9.779s 00:12:26.851 sys 0m4.942s 00:12:26.851 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.851 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:26.851 ************************************ 00:12:26.851 END TEST nvmf_multitarget 00:12:26.851 ************************************ 00:12:26.851 18:20:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:26.851 18:20:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:26.851 18:20:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:26.852 ************************************ 00:12:26.852 START TEST nvmf_rpc 00:12:26.852 ************************************ 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:26.852 * Looking for test storage... 
00:12:26.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:26.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.852 --rc genhtml_branch_coverage=1 00:12:26.852 --rc genhtml_function_coverage=1 00:12:26.852 --rc genhtml_legend=1 00:12:26.852 --rc geninfo_all_blocks=1 00:12:26.852 --rc geninfo_unexecuted_blocks=1 00:12:26.852 00:12:26.852 ' 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:26.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.852 --rc genhtml_branch_coverage=1 00:12:26.852 --rc genhtml_function_coverage=1 00:12:26.852 --rc genhtml_legend=1 00:12:26.852 --rc geninfo_all_blocks=1 00:12:26.852 --rc geninfo_unexecuted_blocks=1 00:12:26.852 00:12:26.852 ' 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:26.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.852 --rc genhtml_branch_coverage=1 00:12:26.852 --rc genhtml_function_coverage=1 00:12:26.852 --rc genhtml_legend=1 00:12:26.852 --rc geninfo_all_blocks=1 00:12:26.852 --rc geninfo_unexecuted_blocks=1 00:12:26.852 00:12:26.852 ' 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:26.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.852 --rc genhtml_branch_coverage=1 00:12:26.852 --rc genhtml_function_coverage=1 00:12:26.852 --rc genhtml_legend=1 00:12:26.852 --rc geninfo_all_blocks=1 00:12:26.852 --rc geninfo_unexecuted_blocks=1 00:12:26.852 00:12:26.852 ' 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
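The scripts/common.sh trace just above (cmp_versions, entered here as lt 1.15 2) is a field-by-field dotted-version comparison: split both versions on '.', '-' and ':', treat missing fields as zero, and decide at the first unequal field. A distilled reconstruction (the real helper also normalizes each field through its decimal check, omitted here):

    lt() {  # succeeds when $1 is strictly lower than $2, e.g. lt 1.15 2
        local -a ver1 ver2
        local v max
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal versions are not strictly less-than
    }

Here the comparison succeeds at the first field (1 < 2), which is why the --rc lcov_branch_coverage option set is exported in the trace above.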
00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:26.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:26.852 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:26.853 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:26.853 18:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.853 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:26.853 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:26.853 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:26.853 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.853 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.853 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.853 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:26.853 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:26.853 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:26.853 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.424 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.424 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:33.424 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:33.424 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:33.424 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:33.425 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:33.425 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:33.425 Found net devices under 0000:86:00.0: cvl_0_0 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:33.425 Found net devices under 0000:86:00.1: cvl_0_1 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:33.425 18:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:33.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:12:33.425 00:12:33.425 --- 10.0.0.2 ping statistics --- 00:12:33.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.425 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:33.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:12:33.425 00:12:33.425 --- 10.0.0.1 ping statistics --- 00:12:33.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.425 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.425 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=354287 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 354287 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 354287 ']' 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.426 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.426 [2024-10-08 18:20:25.941339] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
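What just ran is the phy-mode nvmf_tcp_init: with two ice ports found (cvl_0_0, cvl_0_1), the target port is moved into a dedicated network namespace so target and initiator can share one host, each side gets a 10.0.0.0/24 address, TCP port 4420 is opened through the `ipts` iptables wrapper, and a ping in each direction proves the path before nvmf_tgt is launched inside the namespace. Stripped of the tracing (address flushes and the iptables comment tag omitted), the wiring for this run amounts to:

    NS=cvl_0_0_ns_spdk                              # target namespace
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target ns -> initiator
    ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &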
00:12:33.426 [2024-10-08 18:20:25.941398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.426 [2024-10-08 18:20:26.014882] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.426 [2024-10-08 18:20:26.088767] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.426 [2024-10-08 18:20:26.088805] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.426 [2024-10-08 18:20:26.088812] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.426 [2024-10-08 18:20:26.088822] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.426 [2024-10-08 18:20:26.088828] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.426 [2024-10-08 18:20:26.090277] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.426 [2024-10-08 18:20:26.090396] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.426 [2024-10-08 18:20:26.090506] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.426 [2024-10-08 18:20:26.090506] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:33.685 "tick_rate": 2100000000, 00:12:33.685 "poll_groups": [ 00:12:33.685 { 00:12:33.685 "name": "nvmf_tgt_poll_group_000", 00:12:33.685 "admin_qpairs": 0, 00:12:33.685 "io_qpairs": 0, 00:12:33.685 "current_admin_qpairs": 0, 00:12:33.685 "current_io_qpairs": 0, 00:12:33.685 "pending_bdev_io": 0, 00:12:33.685 "completed_nvme_io": 0, 00:12:33.685 "transports": [] 00:12:33.685 }, 00:12:33.685 { 00:12:33.685 "name": "nvmf_tgt_poll_group_001", 00:12:33.685 "admin_qpairs": 0, 00:12:33.685 "io_qpairs": 0, 00:12:33.685 "current_admin_qpairs": 0, 00:12:33.685 "current_io_qpairs": 0, 00:12:33.685 "pending_bdev_io": 0, 00:12:33.685 "completed_nvme_io": 0, 00:12:33.685 "transports": [] 00:12:33.685 }, 00:12:33.685 { 00:12:33.685 "name": "nvmf_tgt_poll_group_002", 00:12:33.685 "admin_qpairs": 0, 00:12:33.685 "io_qpairs": 0, 00:12:33.685 
"current_admin_qpairs": 0, 00:12:33.685 "current_io_qpairs": 0, 00:12:33.685 "pending_bdev_io": 0, 00:12:33.685 "completed_nvme_io": 0, 00:12:33.685 "transports": [] 00:12:33.685 }, 00:12:33.685 { 00:12:33.685 "name": "nvmf_tgt_poll_group_003", 00:12:33.685 "admin_qpairs": 0, 00:12:33.685 "io_qpairs": 0, 00:12:33.685 "current_admin_qpairs": 0, 00:12:33.685 "current_io_qpairs": 0, 00:12:33.685 "pending_bdev_io": 0, 00:12:33.685 "completed_nvme_io": 0, 00:12:33.685 "transports": [] 00:12:33.685 } 00:12:33.685 ] 00:12:33.685 }' 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.685 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.685 [2024-10-08 18:20:26.939025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.686 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.686 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:33.686 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.686 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.686 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.686 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:33.686 "tick_rate": 2100000000, 00:12:33.686 "poll_groups": [ 00:12:33.686 { 00:12:33.686 "name": "nvmf_tgt_poll_group_000", 00:12:33.686 "admin_qpairs": 0, 00:12:33.686 "io_qpairs": 0, 00:12:33.686 "current_admin_qpairs": 0, 00:12:33.686 "current_io_qpairs": 0, 00:12:33.686 "pending_bdev_io": 0, 00:12:33.686 "completed_nvme_io": 0, 00:12:33.686 "transports": [ 00:12:33.686 { 00:12:33.686 "trtype": "TCP" 00:12:33.686 } 00:12:33.686 ] 00:12:33.686 }, 00:12:33.686 { 00:12:33.686 "name": "nvmf_tgt_poll_group_001", 00:12:33.686 "admin_qpairs": 0, 00:12:33.686 "io_qpairs": 0, 00:12:33.686 "current_admin_qpairs": 0, 00:12:33.686 "current_io_qpairs": 0, 00:12:33.686 "pending_bdev_io": 0, 00:12:33.686 "completed_nvme_io": 0, 00:12:33.686 "transports": [ 00:12:33.686 { 00:12:33.686 "trtype": "TCP" 00:12:33.686 } 00:12:33.686 ] 00:12:33.686 }, 00:12:33.686 { 00:12:33.686 "name": "nvmf_tgt_poll_group_002", 00:12:33.686 "admin_qpairs": 0, 00:12:33.686 "io_qpairs": 0, 00:12:33.686 "current_admin_qpairs": 0, 00:12:33.686 "current_io_qpairs": 0, 00:12:33.686 "pending_bdev_io": 0, 00:12:33.686 "completed_nvme_io": 0, 00:12:33.686 "transports": [ 00:12:33.686 { 00:12:33.686 "trtype": "TCP" 
00:12:33.686 } 00:12:33.686 ] 00:12:33.686 }, 00:12:33.686 { 00:12:33.686 "name": "nvmf_tgt_poll_group_003", 00:12:33.686 "admin_qpairs": 0, 00:12:33.686 "io_qpairs": 0, 00:12:33.686 "current_admin_qpairs": 0, 00:12:33.686 "current_io_qpairs": 0, 00:12:33.686 "pending_bdev_io": 0, 00:12:33.686 "completed_nvme_io": 0, 00:12:33.686 "transports": [ 00:12:33.686 { 00:12:33.686 "trtype": "TCP" 00:12:33.686 } 00:12:33.686 ] 00:12:33.686 } 00:12:33.686 ] 00:12:33.686 }' 00:12:33.686 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:33.686 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:33.686 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:33.686 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:33.945 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:33.945 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:33.945 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:33.945 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.946 Malloc1 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.946 [2024-10-08 18:20:27.106902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:33.946 [2024-10-08 18:20:27.135674] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:12:33.946 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:33.946 could not add new controller: failed to write to nvme-fabrics device 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:33.946 18:20:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.946 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.363 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.363 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:35.363 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.363 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:35.363 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:37.269 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:37.269 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:37.269 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.269 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:37.269 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.269 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:37.269 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.269 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.269 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:37.269 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.270 [2024-10-08 18:20:30.452470] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:12:37.270 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:37.270 could not add new controller: failed to write to nvme-fabrics device 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.270 
18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.270 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.649 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.649 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:38.649 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.649 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:38.649 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:40.556 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:40.556 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:40.556 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.557 
18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.557 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.557 [2024-10-08 18:20:33.873969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.757 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.757 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:41.757 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.757 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:41.757 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:44.295 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:44.295 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:44.295 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.295 [2024-10-08 18:20:37.139160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.295 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.233 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.233 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:45.233 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.233 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:45.233 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.140 [2024-10-08 18:20:40.438213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.140 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.522 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.522 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:48.522 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.522 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:48.522 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:50.429 
18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.429 [2024-10-08 18:20:43.694034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.429 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.809 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.809 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:51.809 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.809 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:51.809 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.715 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.716 [2024-10-08 18:20:46.951891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.716 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.095 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.095 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:55.095 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.095 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:55.095 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:57.005 
18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.005 [2024-10-08 18:20:50.305425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.005 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.265 [2024-10-08 18:20:50.353546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.265 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.265 
18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.265 [2024-10-08 18:20:50.401684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.266 [2024-10-08 18:20:50.449834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.266 [2024-10-08 18:20:50.498005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:12:57.266 "tick_rate": 2100000000,
00:12:57.266 "poll_groups": [
00:12:57.266 {
00:12:57.266 "name": "nvmf_tgt_poll_group_000",
00:12:57.266 "admin_qpairs": 2,
00:12:57.266 "io_qpairs": 168,
00:12:57.266 "current_admin_qpairs": 0,
00:12:57.266 "current_io_qpairs": 0,
00:12:57.266 "pending_bdev_io": 0,
00:12:57.266 "completed_nvme_io": 246,
00:12:57.266 "transports": [
00:12:57.266 {
00:12:57.266 "trtype": "TCP"
00:12:57.266 }
00:12:57.266 ]
00:12:57.266 },
00:12:57.266 {
00:12:57.266 "name": "nvmf_tgt_poll_group_001",
00:12:57.266 "admin_qpairs": 2,
00:12:57.266 "io_qpairs": 168,
00:12:57.266 "current_admin_qpairs": 0,
00:12:57.266 "current_io_qpairs": 0,
00:12:57.266 "pending_bdev_io": 0,
00:12:57.266 "completed_nvme_io": 267,
00:12:57.266 "transports": [
00:12:57.266 {
00:12:57.266 "trtype": "TCP"
00:12:57.266 }
00:12:57.266 ]
00:12:57.266 },
00:12:57.266 {
00:12:57.266 "name": "nvmf_tgt_poll_group_002",
00:12:57.266 "admin_qpairs": 1,
00:12:57.266 "io_qpairs": 168,
00:12:57.266 "current_admin_qpairs": 0,
00:12:57.266 "current_io_qpairs": 0,
00:12:57.266 "pending_bdev_io": 0,
00:12:57.266 "completed_nvme_io": 243,
00:12:57.266 "transports": [
00:12:57.266 {
00:12:57.266 "trtype": "TCP"
00:12:57.266 }
00:12:57.266 ]
00:12:57.266 },
00:12:57.266 {
00:12:57.266 "name": "nvmf_tgt_poll_group_003",
00:12:57.266 "admin_qpairs": 2,
00:12:57.266 "io_qpairs": 168,
00:12:57.266 "current_admin_qpairs": 0,
00:12:57.266 "current_io_qpairs": 0,
00:12:57.266 "pending_bdev_io": 0,
00:12:57.266 "completed_nvme_io": 266,
00:12:57.266 "transports": [
00:12:57.266 {
00:12:57.266 "trtype": "TCP"
00:12:57.266 }
00:12:57.266 ]
00:12:57.266 }
00:12:57.266 ]
00:12:57.266 }'
18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:12:57.266 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 ))
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:57.526 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 354287 ']'
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 354287
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 354287 ']'
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 354287
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 354287
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 354287'
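The two jsum calls above are how target/rpc.sh cross-checks the nvmf_get_stats dump: jq pulls one number per poll group out of the JSON and awk totals them, so (( 7 > 0 )) is the sum of admin queue pairs (2+2+1+2 across the four poll groups) and (( 672 > 0 )) the sum of I/O queue pairs (4 x 168). A minimal stand-alone sketch of that aggregation, assuming a running target with rpc.py on PATH plus jq and awk available; the helper name sum_stat is illustrative and not part of the harness:

    #!/usr/bin/env bash
    # Sum one numeric field across all poll groups reported by nvmf_get_stats,
    # mirroring jsum in target/rpc.sh: jq extracts the values, awk adds them up.
    sum_stat() {
        local filter=$1
        rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    admin_total=$(sum_stat '.poll_groups[].admin_qpairs')   # 7 in the run above
    io_total=$(sum_stat '.poll_groups[].io_qpairs')         # 672 in the run above
    (( admin_total > 0 && io_total > 0 )) && echo "qpair counters look sane"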
00:12:57.526 killing process with pid 354287
18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 354287
00:12:57.526 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 354287
00:12:57.786 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:12:57.786 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:12:57.786 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:12:57.786 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr
00:12:57.786 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save
00:12:57.786 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:12:57.786 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore
00:12:57.786 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:57.786 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:57.786 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:57.786 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:57.786 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:00.328 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:00.328
00:13:00.328 real 0m33.378s
00:13:00.328 user 1m40.992s
00:13:00.328 sys 0m6.565s
00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:00.329 ************************************
00:13:00.329 END TEST nvmf_rpc
00:13:00.329 ************************************
00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:00.329 ************************************
00:13:00.329 START TEST nvmf_invalid
00:13:00.329 ************************************
00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:13:00.329 * Looking for test storage...
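The nvmf_rpc test that just closed out spent its 33 seconds repeatedly building up and tearing down a single subsystem: a run of passes that also connected a host through nvme-cli and waited for the serial number to show up in lsblk, then five more (the seq 1 5 loop) that only churned the namespace. Stripped of harness plumbing, one pass of the fuller loop reduces to roughly the sequence below. This is a hedged sketch assembled from the RPCs visible in the trace, assuming rpc.py on PATH and an existing Malloc1 bdev on the target; the --hostnqn/--hostid flags shown earlier in the log are omitted for brevity:

    #!/usr/bin/env bash
    # One create/connect/verify/teardown pass, as exercised by target/rpc.sh.
    NQN=nqn.2016-06.io.spdk:cnode1
    SERIAL=SPDKISFASTANDAWESOME

    rpc.py nvmf_create_subsystem "$NQN" -s "$SERIAL"
    rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
    rpc.py nvmf_subsystem_allow_any_host "$NQN"

    nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420     # host side
    until lsblk -l -o NAME,SERIAL | grep -qw "$SERIAL"; do sleep 2; done

    nvme disconnect -n "$NQN"
    rpc.py nvmf_subsystem_remove_ns "$NQN" 5
    rpc.py nvmf_delete_subsystem "$NQN"

The real waitforserial additionally caps the polling, which is the (( i++ <= 15 )) counter visible in the trace; the until loop in this sketch would spin forever if the device never appeared.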
00:13:00.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:00.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.329 --rc genhtml_branch_coverage=1 00:13:00.329 --rc genhtml_function_coverage=1 00:13:00.329 --rc genhtml_legend=1 00:13:00.329 --rc geninfo_all_blocks=1 00:13:00.329 --rc geninfo_unexecuted_blocks=1 00:13:00.329 00:13:00.329 ' 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:00.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.329 --rc genhtml_branch_coverage=1 00:13:00.329 --rc genhtml_function_coverage=1 00:13:00.329 --rc genhtml_legend=1 00:13:00.329 --rc geninfo_all_blocks=1 00:13:00.329 --rc geninfo_unexecuted_blocks=1 00:13:00.329 00:13:00.329 ' 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:00.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.329 --rc genhtml_branch_coverage=1 00:13:00.329 --rc genhtml_function_coverage=1 00:13:00.329 --rc genhtml_legend=1 00:13:00.329 --rc geninfo_all_blocks=1 00:13:00.329 --rc geninfo_unexecuted_blocks=1 00:13:00.329 00:13:00.329 ' 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:00.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.329 --rc genhtml_branch_coverage=1 00:13:00.329 --rc genhtml_function_coverage=1 00:13:00.329 --rc genhtml_legend=1 00:13:00.329 --rc geninfo_all_blocks=1 00:13:00.329 --rc geninfo_unexecuted_blocks=1 00:13:00.329 00:13:00.329 ' 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:00.329 18:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.329 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:00.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:00.330 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:06.906 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:06.906 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:06.906 Found net devices under 0000:86:00.0: cvl_0_0 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:06.906 Found net devices under 0000:86:00.1: cvl_0_1 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:06.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:06.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:13:06.906 00:13:06.906 --- 10.0.0.2 ping statistics --- 00:13:06.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.906 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:13:06.906 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:06.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:13:06.907 00:13:06.907 --- 10.0.0.1 ping statistics --- 00:13:06.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.907 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=362119 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 362119 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 362119 ']' 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.907 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.907 [2024-10-08 18:20:59.384102] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
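[Editor's note] The nvmf_tgt startup banner continues below; once the target is listening on /var/tmp/spdk.sock, the trace exercises nvmf_create_subsystem with deliberately invalid arguments. The test idiom visible in the trace is: capture rpc.py's JSON-RPC error output into $out, then glob-match the expected message. A minimal sketch of that idiom, condensed from the xtrace rather than copied from test/nvmf/target/invalid.sh, assuming rpc.py is invoked from an SPDK checkout (the log uses the absolute Jenkins workspace path):

#!/usr/bin/env bash
# Minimal sketch of the negative-test idiom traced below; reconstructed from
# the xtrace, so the real invalid.sh may differ in detail.
rpc=./scripts/rpc.py               # the log uses the full workspace path
nqn=nqn.2016-06.io.spdk:cnode

# Ask for a target that does not exist; the call must fail with code -32603.
out=$("$rpc" nvmf_create_subsystem -t foobar "${nqn}12132" 2>&1) || true

# The script matches the message with an escaped glob, e.g.
#   [[ $out == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
[[ $out == *"Unable to find target"* ]] || { echo "unexpected: $out"; exit 1; }

The same pattern repeats for each case that follows (invalid serial number, invalid model number, invalid cntlid ranges), only the argument and the expected "Invalid SN"/"Invalid MN"/"Invalid cntlid range" substring change.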
00:13:06.907 [2024-10-08 18:20:59.384146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.907 [2024-10-08 18:20:59.456193] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.907 [2024-10-08 18:20:59.534543] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.907 [2024-10-08 18:20:59.534580] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.907 [2024-10-08 18:20:59.534587] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.907 [2024-10-08 18:20:59.534593] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.907 [2024-10-08 18:20:59.534598] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.907 [2024-10-08 18:20:59.536157] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.907 [2024-10-08 18:20:59.536265] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.907 [2024-10-08 18:20:59.536369] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.907 [2024-10-08 18:20:59.536370] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.166 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:07.166 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:07.166 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:07.166 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:07.166 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:07.166 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.166 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:07.166 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12132 00:13:07.166 [2024-10-08 18:21:00.453666] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:07.166 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:07.166 { 00:13:07.166 "nqn": "nqn.2016-06.io.spdk:cnode12132", 00:13:07.166 "tgt_name": "foobar", 00:13:07.166 "method": "nvmf_create_subsystem", 00:13:07.166 "req_id": 1 00:13:07.166 } 00:13:07.166 Got JSON-RPC error response 00:13:07.166 response: 00:13:07.166 { 00:13:07.166 "code": -32603, 00:13:07.166 "message": "Unable to find target foobar" 00:13:07.166 }' 00:13:07.166 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:07.166 { 00:13:07.166 "nqn": "nqn.2016-06.io.spdk:cnode12132", 00:13:07.166 "tgt_name": "foobar", 00:13:07.166 "method": "nvmf_create_subsystem", 00:13:07.166 "req_id": 1 00:13:07.166 } 00:13:07.166 Got JSON-RPC error response 00:13:07.166 
response: 00:13:07.166 { 00:13:07.166 "code": -32603, 00:13:07.166 "message": "Unable to find target foobar" 00:13:07.166 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:07.426 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:07.426 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19403 00:13:07.426 [2024-10-08 18:21:00.674447] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19403: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:07.426 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:07.426 { 00:13:07.426 "nqn": "nqn.2016-06.io.spdk:cnode19403", 00:13:07.426 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:07.426 "method": "nvmf_create_subsystem", 00:13:07.426 "req_id": 1 00:13:07.426 } 00:13:07.426 Got JSON-RPC error response 00:13:07.426 response: 00:13:07.426 { 00:13:07.426 "code": -32602, 00:13:07.426 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:07.426 }' 00:13:07.426 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:07.426 { 00:13:07.426 "nqn": "nqn.2016-06.io.spdk:cnode19403", 00:13:07.426 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:07.426 "method": "nvmf_create_subsystem", 00:13:07.426 "req_id": 1 00:13:07.426 } 00:13:07.426 Got JSON-RPC error response 00:13:07.426 response: 00:13:07.426 { 00:13:07.426 "code": -32602, 00:13:07.426 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:07.426 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:07.426 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:07.426 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24924 00:13:07.686 [2024-10-08 18:21:00.883097] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24924: invalid model number 'SPDK_Controller' 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:07.686 { 00:13:07.686 "nqn": "nqn.2016-06.io.spdk:cnode24924", 00:13:07.686 "model_number": "SPDK_Controller\u001f", 00:13:07.686 "method": "nvmf_create_subsystem", 00:13:07.686 "req_id": 1 00:13:07.686 } 00:13:07.686 Got JSON-RPC error response 00:13:07.686 response: 00:13:07.686 { 00:13:07.686 "code": -32602, 00:13:07.686 "message": "Invalid MN SPDK_Controller\u001f" 00:13:07.686 }' 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:07.686 { 00:13:07.686 "nqn": "nqn.2016-06.io.spdk:cnode24924", 00:13:07.686 "model_number": "SPDK_Controller\u001f", 00:13:07.686 "method": "nvmf_create_subsystem", 00:13:07.686 "req_id": 1 00:13:07.686 } 00:13:07.686 Got JSON-RPC error response 00:13:07.686 response: 00:13:07.686 { 00:13:07.686 "code": -32602, 00:13:07.686 "message": "Invalid MN SPDK_Controller\u001f" 00:13:07.686 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:07.686 18:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.686 18:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:07.686 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:07.687 
18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.687 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:07.687 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:07.687 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:07.687 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.687 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:07.968 
18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:07.968 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 7 == \- ]] 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '7H+J2~(OiAd[9DWU N=Pd' 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '7H+J2~(OiAd[9DWU N=Pd' nqn.2016-06.io.spdk:cnode29550 00:13:07.969 [2024-10-08 18:21:01.204134] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29550: invalid serial number '7H+J2~(OiAd[9DWU N=Pd' 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:07.969 { 00:13:07.969 "nqn": "nqn.2016-06.io.spdk:cnode29550", 00:13:07.969 "serial_number": "7H+J2~(OiAd[9DWU N=Pd", 00:13:07.969 "method": "nvmf_create_subsystem", 00:13:07.969 "req_id": 1 00:13:07.969 } 00:13:07.969 Got JSON-RPC error response 00:13:07.969 response: 00:13:07.969 { 00:13:07.969 "code": -32602, 00:13:07.969 "message": "Invalid SN 7H+J2~(OiAd[9DWU N=Pd" 00:13:07.969 }' 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:07.969 { 00:13:07.969 "nqn": "nqn.2016-06.io.spdk:cnode29550", 00:13:07.969 "serial_number": "7H+J2~(OiAd[9DWU N=Pd", 00:13:07.969 "method": "nvmf_create_subsystem", 00:13:07.969 "req_id": 1 00:13:07.969 } 00:13:07.969 Got JSON-RPC error response 00:13:07.969 response: 00:13:07.969 { 00:13:07.969 "code": -32602, 00:13:07.969 "message": "Invalid SN 7H+J2~(OiAd[9DWU N=Pd" 00:13:07.969 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' 
'77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:07.969 
18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.969 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:08.242 
18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.242 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
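[Editor's note] The long run of printf %x / echo -e / string+= lines above and below is gen_random_s assembling one character per iteration from the chars array of decimal ASCII codes 32-127 dumped at invalid.sh@21; RANDOM was seeded to 0 at invalid.sh@16, so the generated strings are reproducible across runs. A condensed sketch of the loop as reconstructed from the trace (the helper in invalid.sh is assumed equivalent, not identical; the trace also shows it checking whether the first character is '-', the [[ 7 == \- ]] test at invalid.sh@28):

# Sketch of gen_random_s, reconstructed from the xtrace in this log.
gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 32 127))          # same code points as invalid.sh@21
    for ((ll = 0; ll < length; ll++)); do
        # Pick a code, render it as a hex escape, and append the character.
        printf -v hex '%x' "${chars[RANDOM % ${#chars[@]}]}"
        string+=$(echo -e "\x$hex")
    done
    echo "$string"
}
# e.g. gen_random_s 21 produced the serial '7H+J2~(OiAd[9DWU N=Pd' in this run,
# and gen_random_s 41 the model number used in the next test case.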
00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='`' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x22' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ % == \- ]] 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '%zgluVtkle<7>-~8pPC:.K{Z$#)0#@p`"?d2G&"/' 00:13:08.243 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '%zgluVtkle<7>-~8pPC:.K{Z$#)0#@p`"?d2G&"/' nqn.2016-06.io.spdk:cnode14029 00:13:08.523 [2024-10-08 18:21:01.669696] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14029: invalid model number '%zgluVtkle<7>-~8pPC:.K{Z$#)0#@p`"?d2G&"/' 00:13:08.523 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:08.523 { 00:13:08.523 "nqn": "nqn.2016-06.io.spdk:cnode14029", 00:13:08.523 "model_number": "%zg\u007fluVtkle<7>-~8pPC:.K{Z$#)0#@p`\"?d2G&\"/", 00:13:08.523 "method": "nvmf_create_subsystem", 00:13:08.523 "req_id": 1 00:13:08.523 } 00:13:08.523 Got JSON-RPC error response 00:13:08.523 response: 00:13:08.523 { 00:13:08.523 "code": -32602, 00:13:08.523 "message": "Invalid MN %zg\u007fluVtkle<7>-~8pPC:.K{Z$#)0#@p`\"?d2G&\"/" 00:13:08.523 }' 00:13:08.523 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:08.524 { 00:13:08.524 "nqn": "nqn.2016-06.io.spdk:cnode14029", 00:13:08.524 "model_number": "%zg\u007fluVtkle<7>-~8pPC:.K{Z$#)0#@p`\"?d2G&\"/", 00:13:08.524 "method": "nvmf_create_subsystem", 00:13:08.524 "req_id": 1 00:13:08.524 } 00:13:08.524 Got JSON-RPC error response 00:13:08.524 response: 00:13:08.524 { 00:13:08.524 "code": -32602, 00:13:08.524 "message": "Invalid MN %zg\u007fluVtkle<7>-~8pPC:.K{Z$#)0#@p`\"?d2G&\"/" 00:13:08.524 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:08.524 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:08.791 [2024-10-08 18:21:01.878446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.791 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:09.051 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:09.051 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:09.051 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:09.051 18:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:09.051 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:09.051 [2024-10-08 18:21:02.303803] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:09.051 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:09.051 { 00:13:09.051 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:09.051 "listen_address": { 00:13:09.051 "trtype": "tcp", 00:13:09.051 "traddr": "", 00:13:09.051 "trsvcid": "4421" 00:13:09.051 }, 00:13:09.051 "method": "nvmf_subsystem_remove_listener", 00:13:09.051 "req_id": 1 00:13:09.051 } 00:13:09.051 Got JSON-RPC error response 00:13:09.051 response: 00:13:09.051 { 00:13:09.051 "code": -32602, 00:13:09.051 "message": "Invalid parameters" 00:13:09.051 }' 00:13:09.051 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:09.051 { 00:13:09.051 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:09.051 "listen_address": { 00:13:09.051 "trtype": "tcp", 00:13:09.051 "traddr": "", 00:13:09.051 "trsvcid": "4421" 00:13:09.051 }, 00:13:09.051 "method": "nvmf_subsystem_remove_listener", 00:13:09.051 "req_id": 1 00:13:09.051 } 00:13:09.051 Got JSON-RPC error response 00:13:09.051 response: 00:13:09.051 { 00:13:09.051 "code": -32602, 00:13:09.051 "message": "Invalid parameters" 00:13:09.051 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:09.051 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22981 -i 0 00:13:09.310 [2024-10-08 18:21:02.504402] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22981: invalid cntlid range [0-65519] 00:13:09.311 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:09.311 { 00:13:09.311 "nqn": "nqn.2016-06.io.spdk:cnode22981", 00:13:09.311 "min_cntlid": 0, 00:13:09.311 "method": "nvmf_create_subsystem", 00:13:09.311 "req_id": 1 00:13:09.311 } 00:13:09.311 Got JSON-RPC error response 00:13:09.311 response: 00:13:09.311 { 00:13:09.311 "code": -32602, 00:13:09.311 "message": "Invalid cntlid range [0-65519]" 00:13:09.311 }' 00:13:09.311 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:09.311 { 00:13:09.311 "nqn": "nqn.2016-06.io.spdk:cnode22981", 00:13:09.311 "min_cntlid": 0, 00:13:09.311 "method": "nvmf_create_subsystem", 00:13:09.311 "req_id": 1 00:13:09.311 } 00:13:09.311 Got JSON-RPC error response 00:13:09.311 response: 00:13:09.311 { 00:13:09.311 "code": -32602, 00:13:09.311 "message": "Invalid cntlid range [0-65519]" 00:13:09.311 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:09.311 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4656 -i 65520 00:13:09.570 [2024-10-08 18:21:02.701059] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4656: invalid cntlid range [65520-65519] 00:13:09.570 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:09.570 { 00:13:09.570 "nqn": 
"nqn.2016-06.io.spdk:cnode4656", 00:13:09.570 "min_cntlid": 65520, 00:13:09.570 "method": "nvmf_create_subsystem", 00:13:09.570 "req_id": 1 00:13:09.570 } 00:13:09.570 Got JSON-RPC error response 00:13:09.570 response: 00:13:09.570 { 00:13:09.570 "code": -32602, 00:13:09.570 "message": "Invalid cntlid range [65520-65519]" 00:13:09.570 }' 00:13:09.570 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:09.570 { 00:13:09.570 "nqn": "nqn.2016-06.io.spdk:cnode4656", 00:13:09.570 "min_cntlid": 65520, 00:13:09.570 "method": "nvmf_create_subsystem", 00:13:09.570 "req_id": 1 00:13:09.570 } 00:13:09.570 Got JSON-RPC error response 00:13:09.570 response: 00:13:09.570 { 00:13:09.570 "code": -32602, 00:13:09.570 "message": "Invalid cntlid range [65520-65519]" 00:13:09.570 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:09.570 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15567 -I 0 00:13:09.830 [2024-10-08 18:21:02.897682] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15567: invalid cntlid range [1-0] 00:13:09.830 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:09.830 { 00:13:09.830 "nqn": "nqn.2016-06.io.spdk:cnode15567", 00:13:09.830 "max_cntlid": 0, 00:13:09.830 "method": "nvmf_create_subsystem", 00:13:09.830 "req_id": 1 00:13:09.830 } 00:13:09.830 Got JSON-RPC error response 00:13:09.830 response: 00:13:09.830 { 00:13:09.830 "code": -32602, 00:13:09.830 "message": "Invalid cntlid range [1-0]" 00:13:09.830 }' 00:13:09.830 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:09.830 { 00:13:09.830 "nqn": "nqn.2016-06.io.spdk:cnode15567", 00:13:09.830 "max_cntlid": 0, 00:13:09.830 "method": "nvmf_create_subsystem", 00:13:09.830 "req_id": 1 00:13:09.830 } 00:13:09.830 Got JSON-RPC error response 00:13:09.830 response: 00:13:09.830 { 00:13:09.830 "code": -32602, 00:13:09.830 "message": "Invalid cntlid range [1-0]" 00:13:09.830 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:09.830 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17414 -I 65520 00:13:09.830 [2024-10-08 18:21:03.094312] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17414: invalid cntlid range [1-65520] 00:13:09.830 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:09.830 { 00:13:09.830 "nqn": "nqn.2016-06.io.spdk:cnode17414", 00:13:09.830 "max_cntlid": 65520, 00:13:09.830 "method": "nvmf_create_subsystem", 00:13:09.830 "req_id": 1 00:13:09.830 } 00:13:09.830 Got JSON-RPC error response 00:13:09.830 response: 00:13:09.830 { 00:13:09.830 "code": -32602, 00:13:09.830 "message": "Invalid cntlid range [1-65520]" 00:13:09.830 }' 00:13:09.830 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:09.830 { 00:13:09.830 "nqn": "nqn.2016-06.io.spdk:cnode17414", 00:13:09.830 "max_cntlid": 65520, 00:13:09.830 "method": "nvmf_create_subsystem", 00:13:09.830 "req_id": 1 00:13:09.830 } 00:13:09.830 Got JSON-RPC error response 00:13:09.830 response: 00:13:09.830 { 00:13:09.830 "code": -32602, 00:13:09.830 "message": "Invalid cntlid range [1-65520]" 
00:13:09.830 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:09.830 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26435 -i 6 -I 5 00:13:10.089 [2024-10-08 18:21:03.311052] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26435: invalid cntlid range [6-5] 00:13:10.089 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:10.089 { 00:13:10.089 "nqn": "nqn.2016-06.io.spdk:cnode26435", 00:13:10.089 "min_cntlid": 6, 00:13:10.089 "max_cntlid": 5, 00:13:10.089 "method": "nvmf_create_subsystem", 00:13:10.089 "req_id": 1 00:13:10.089 } 00:13:10.089 Got JSON-RPC error response 00:13:10.089 response: 00:13:10.089 { 00:13:10.089 "code": -32602, 00:13:10.089 "message": "Invalid cntlid range [6-5]" 00:13:10.089 }' 00:13:10.089 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:10.089 { 00:13:10.089 "nqn": "nqn.2016-06.io.spdk:cnode26435", 00:13:10.090 "min_cntlid": 6, 00:13:10.090 "max_cntlid": 5, 00:13:10.090 "method": "nvmf_create_subsystem", 00:13:10.090 "req_id": 1 00:13:10.090 } 00:13:10.090 Got JSON-RPC error response 00:13:10.090 response: 00:13:10.090 { 00:13:10.090 "code": -32602, 00:13:10.090 "message": "Invalid cntlid range [6-5]" 00:13:10.090 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.090 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:10.348 { 00:13:10.348 "name": "foobar", 00:13:10.348 "method": "nvmf_delete_target", 00:13:10.348 "req_id": 1 00:13:10.348 } 00:13:10.348 Got JSON-RPC error response 00:13:10.348 response: 00:13:10.348 { 00:13:10.348 "code": -32602, 00:13:10.348 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:10.348 }' 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:10.348 { 00:13:10.348 "name": "foobar", 00:13:10.348 "method": "nvmf_delete_target", 00:13:10.348 "req_id": 1 00:13:10.348 } 00:13:10.348 Got JSON-RPC error response 00:13:10.348 response: 00:13:10.348 { 00:13:10.348 "code": -32602, 00:13:10.348 "message": "The specified target doesn't exist, cannot delete it." 
00:13:10.348 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:10.348 rmmod nvme_tcp 00:13:10.348 rmmod nvme_fabrics 00:13:10.348 rmmod nvme_keyring 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 362119 ']' 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 362119 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 362119 ']' 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 362119 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 362119 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 362119' 00:13:10.348 killing process with pid 362119 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 362119 00:13:10.348 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 362119 00:13:10.607 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:10.608 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:10.608 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:10.608 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:10.608 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:13:10.608 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:10.608 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # 
iptables-restore 00:13:10.608 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:10.608 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:10.608 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.608 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.608 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.515 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:12.515 00:13:12.515 real 0m12.697s 00:13:12.515 user 0m21.141s 00:13:12.515 sys 0m5.506s 00:13:12.515 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.515 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:12.515 ************************************ 00:13:12.515 END TEST nvmf_invalid 00:13:12.515 ************************************ 00:13:12.775 18:21:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:12.775 18:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:12.775 18:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:12.775 18:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:12.775 ************************************ 00:13:12.775 START TEST nvmf_connect_stress 00:13:12.775 ************************************ 00:13:12.775 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:12.775 * Looking for test storage... 
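The nvmf_invalid suite that just finished exercises each error path the same way: issue one JSON-RPC call through rpc.py with a single out-of-range parameter, capture the error response, and match it against the expected message ("Invalid MN", "Invalid cntlid range", and so on). A minimal standalone sketch of that pattern, assuming a running nvmf_tgt and the rpc.py path used above; the cnode number here is arbitrary:

    # Negative test: min_cntlid=0 is below the valid cntlid range (1-65519),
    # so the target must reject the subsystem with a -32602 JSON-RPC error.
    out=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] && echo "negative test passed"

The `|| true` keeps a set -e shell alive through the expected nonzero exit, so the test can assert on the captured error text instead of aborting.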
00:13:12.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:12.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.775 --rc genhtml_branch_coverage=1 00:13:12.775 --rc genhtml_function_coverage=1 00:13:12.775 --rc genhtml_legend=1 00:13:12.775 --rc geninfo_all_blocks=1 00:13:12.775 --rc geninfo_unexecuted_blocks=1 00:13:12.775 00:13:12.775 ' 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:12.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.775 --rc genhtml_branch_coverage=1 00:13:12.775 --rc genhtml_function_coverage=1 00:13:12.775 --rc genhtml_legend=1 00:13:12.775 --rc geninfo_all_blocks=1 00:13:12.775 --rc geninfo_unexecuted_blocks=1 00:13:12.775 00:13:12.775 ' 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:12.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.775 --rc genhtml_branch_coverage=1 00:13:12.775 --rc genhtml_function_coverage=1 00:13:12.775 --rc genhtml_legend=1 00:13:12.775 --rc geninfo_all_blocks=1 00:13:12.775 --rc geninfo_unexecuted_blocks=1 00:13:12.775 00:13:12.775 ' 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:12.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.775 --rc genhtml_branch_coverage=1 00:13:12.775 --rc genhtml_function_coverage=1 00:13:12.775 --rc genhtml_legend=1 00:13:12.775 --rc geninfo_all_blocks=1 00:13:12.775 --rc geninfo_unexecuted_blocks=1 00:13:12.775 00:13:12.775 ' 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.775 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:13.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.035 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:13.036 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:13.036 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:13.036 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:19.609 18:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:19.609 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:19.609 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:19.609 Found net devices under 0000:86:00.0: cvl_0_0 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.609 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:19.610 Found net devices under 0000:86:00.1: cvl_0_1 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:19.610 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:19.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:19.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:13:19.610 00:13:19.610 --- 10.0.0.2 ping statistics --- 00:13:19.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.610 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:19.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:13:19.610 00:13:19.610 --- 10.0.0.1 ping statistics --- 00:13:19.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.610 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=367026 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 367026 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 367026 ']' 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:19.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.610 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.610 [2024-10-08 18:21:12.206069] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:13:19.610 [2024-10-08 18:21:12.206113] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.610 [2024-10-08 18:21:12.277122] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:19.610 [2024-10-08 18:21:12.348056] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.610 [2024-10-08 18:21:12.348097] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.610 [2024-10-08 18:21:12.348104] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.610 [2024-10-08 18:21:12.348110] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.610 [2024-10-08 18:21:12.348115] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.610 [2024-10-08 18:21:12.349107] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.610 [2024-10-08 18:21:12.349216] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.610 [2024-10-08 18:21:12.349217] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.870 [2024-10-08 18:21:13.079080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
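The nvmftestinit sequence above gives the target and initiator their own network stacks on a single host: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side, where nvmf_tgt is launched via ip netns exec), while cvl_0_1 stays in the default namespace as the 10.0.0.1 initiator side. Condensed from the commands in the log into a sketch (interface names and addresses as used on this node):

    # Target port goes into its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listen port toward the initiator interface, then verify
    # reachability in both directions before starting the target.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1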
00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.870 [2024-10-08 18:21:13.110748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.870 NULL1 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=367152 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.870 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.870 18:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:19.871 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.130 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.130 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.130 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.130 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.130 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.130 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.130 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.130 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.130 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.130 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.130 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.130 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.130 18:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 367152 00:13:20.130 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.130 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.130 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.389 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.389 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 367152 00:13:27.633 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.633 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.633 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.891 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.891 18:21:21
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 367152 00:13:27.891 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.891 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.891 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.150 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.150 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 367152 00:13:28.150 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.150 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.150 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.409 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.409 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 367152 00:13:28.409 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.409 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.409 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.977 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.977 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 367152 00:13:28.977 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.977 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.977 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.237 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.237 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 367152 00:13:29.237 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.237 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.237 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.496 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.496 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 367152 00:13:29.496 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.496 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.496 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.755 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.755 18:21:22 
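The block that repeats from 00:13:20 through 00:13:29 is the script's liveness poll: line 34 probes the backgrounded stress process (pid 367152 in this run) with kill -0, and line 35 replays the generated RPC batch for as long as the probe succeeds; the "No such process" message below marks the iteration where the pid finally disappears. A sketch of that loop, assuming rpc_cmd is the autotest helper that forwards command lines to rpc.py:

while kill -0 "$stress_pid" 2> /dev/null; do   # line 34: stress process still alive?
    rpc_cmd < "$RPC_FILE"                      # line 35: replay the RPC batch
done
wait "$stress_pid"                             # line 38: reap it once it exits
rm -f "$RPC_FILE"                              # line 39: drop the batch file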
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 367152 00:13:29.755 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.755 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.755 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.014 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:30.014 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.014 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 367152 00:13:30.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (367152) - No such process 00:13:30.014 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 367152 00:13:30.014 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:30.014 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:30.014 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:30.014 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:30.014 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:30.014 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:30.014 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:30.014 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:30.014 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:30.014 rmmod nvme_tcp 00:13:30.274 rmmod nvme_fabrics 00:13:30.274 rmmod nvme_keyring 00:13:30.274 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:30.274 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:30.274 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:30.274 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 367026 ']' 00:13:30.274 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 367026 00:13:30.274 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 367026 ']' 00:13:30.274 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 367026 00:13:30.274 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:30.274 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:30.274 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 367026 00:13:30.274 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 
00:13:30.274 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:30.274 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 367026' 00:13:30.274 killing process with pid 367026 00:13:30.274 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 367026 00:13:30.274 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 367026 00:13:30.533 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:30.533 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:30.533 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:30.533 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:30.533 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:13:30.533 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:30.533 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:13:30.533 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:30.533 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:30.533 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.533 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.533 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.439 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:32.439 00:13:32.439 real 0m19.796s 00:13:32.439 user 0m41.443s 00:13:32.439 sys 0m8.633s 00:13:32.439 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.439 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.439 ************************************ 00:13:32.439 END TEST nvmf_connect_stress 00:13:32.439 ************************************ 00:13:32.439 18:21:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:32.439 18:21:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:32.439 18:21:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.439 18:21:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:32.699 ************************************ 00:13:32.699 START TEST nvmf_fused_ordering 00:13:32.699 ************************************ 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:32.699 * Looking for test storage... 
00:13:32.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:32.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.699 --rc genhtml_branch_coverage=1 00:13:32.699 --rc genhtml_function_coverage=1 00:13:32.699 --rc genhtml_legend=1 00:13:32.699 --rc geninfo_all_blocks=1 00:13:32.699 --rc geninfo_unexecuted_blocks=1 00:13:32.699 00:13:32.699 ' 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:32.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.699 --rc genhtml_branch_coverage=1 00:13:32.699 --rc genhtml_function_coverage=1 00:13:32.699 --rc genhtml_legend=1 00:13:32.699 --rc geninfo_all_blocks=1 00:13:32.699 --rc geninfo_unexecuted_blocks=1 00:13:32.699 00:13:32.699 ' 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:32.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.699 --rc genhtml_branch_coverage=1 00:13:32.699 --rc genhtml_function_coverage=1 00:13:32.699 --rc genhtml_legend=1 00:13:32.699 --rc geninfo_all_blocks=1 00:13:32.699 --rc geninfo_unexecuted_blocks=1 00:13:32.699 00:13:32.699 ' 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:32.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.699 --rc genhtml_branch_coverage=1 00:13:32.699 --rc genhtml_function_coverage=1 00:13:32.699 --rc genhtml_legend=1 00:13:32.699 --rc geninfo_all_blocks=1 00:13:32.699 --rc geninfo_unexecuted_blocks=1 00:13:32.699 00:13:32.699 ' 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:32.699 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:32.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:32.700 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.270 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.270 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:39.270 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:39.270 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:39.270 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:39.270 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:39.270 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:39.270 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:39.270 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:39.270 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:39.270 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:39.270 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:39.270 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:39.271 18:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:39.271 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:39.271 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:39.271 Found net devices under 0000:86:00.0: cvl_0_0 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:39.271 Found net devices under 0000:86:00.1: cvl_0_1 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:39.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:39.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:13:39.271 00:13:39.271 --- 10.0.0.2 ping statistics --- 00:13:39.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.271 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:39.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:39.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:13:39.271 00:13:39.271 --- 10.0.0.1 ping statistics --- 00:13:39.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.271 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=372422 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:39.271 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 372422 00:13:39.272 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 372422 ']' 00:13:39.272 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.272 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:39.272 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:39.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.272 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:39.272 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.272 [2024-10-08 18:21:32.034441] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:13:39.272 [2024-10-08 18:21:32.034486] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.272 [2024-10-08 18:21:32.106961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.272 [2024-10-08 18:21:32.181875] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.272 [2024-10-08 18:21:32.181915] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.272 [2024-10-08 18:21:32.181922] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.272 [2024-10-08 18:21:32.181928] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.272 [2024-10-08 18:21:32.181934] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.272 [2024-10-08 18:21:32.182466] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.840 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.840 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:39.840 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:39.840 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.841 [2024-10-08 18:21:32.902772] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.841 [2024-10-08 18:21:32.922961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.841 NULL1 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.841 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:39.841 [2024-10-08 18:21:32.978805] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
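Read together, the rpc_cmd calls above stand up the whole target side of this test: a TCP transport with 8 KiB I/O units, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces and open to any host, a listener on 10.0.0.2:4420, and a 1000 MB null bdev attached as its first namespace (the "Namespace ID: 1 size: 1GB" line below). The same sequence as a standalone sketch, assuming scripts/rpc.py talking to the target's /var/tmp/spdk.sock shown earlier; the -o transport flag is replayed verbatim from the trace:

rpc.py nvmf_create_transport -t tcp -o -u 8192                 # -u 8192: 8 KiB I/O unit size
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -m 10                          # -a: any host; -m: max 10 namespaces
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512                         # 1000 MB null bdev, 512 B blocks
rpc.py bdev_wait_for_examine
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1  # expose NULL1 as namespace 1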
00:13:39.841 [2024-10-08 18:21:32.978848] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid372587 ] 00:13:40.100 Attached to nqn.2016-06.io.spdk:cnode1 00:13:40.100 Namespace ID: 1 size: 1GB
00:13:40.100 fused_ordering(0) ... fused_ordering(1023): 1024 iterations completed between 00:13:40.100 and 00:13:41.492 (per-iteration output elided)
00:13:41.492 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:41.493 rmmod nvme_tcp 00:13:41.493 rmmod nvme_fabrics 00:13:41.493 rmmod nvme_keyring 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
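The fused_ordering app above reached the target through the trid string 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'. The same listener can be sanity-checked by hand with stock nvme-cli (a sketch; not part of this run):

  nvme discover -t tcp -a 10.0.0.2 -s 4420                        # discovery log page should list cnode1
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list                                                       # the 1GB NULL1 namespace appears as a block device
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1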
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 372422 ']' 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 372422 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 372422 ']' 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 372422 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:41.493 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 372422 00:13:41.752 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:41.752 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:41.752 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 372422' 00:13:41.752 killing process with pid 372422 00:13:41.752 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 372422 00:13:41.752 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 372422 00:13:41.752 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:41.752 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:41.752 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:41.752 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:41.752 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:13:41.752 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:41.752 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:13:41.752 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:41.752 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:41.752 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.752 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:41.752 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:44.291 00:13:44.291 real 0m11.331s 00:13:44.291 user 0m5.711s 00:13:44.291 sys 0m5.916s 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.291 ************************************ 00:13:44.291 END TEST nvmf_fused_ordering 00:13:44.291 
************************************ 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:44.291 ************************************ 00:13:44.291 START TEST nvmf_ns_masking 00:13:44.291 ************************************ 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:44.291 * Looking for test storage... 00:13:44.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:44.291 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:44.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.292 --rc genhtml_branch_coverage=1 00:13:44.292 --rc genhtml_function_coverage=1 00:13:44.292 --rc genhtml_legend=1 00:13:44.292 --rc geninfo_all_blocks=1 00:13:44.292 --rc geninfo_unexecuted_blocks=1 00:13:44.292 00:13:44.292 ' 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:44.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.292 --rc genhtml_branch_coverage=1 00:13:44.292 --rc genhtml_function_coverage=1 00:13:44.292 --rc genhtml_legend=1 00:13:44.292 --rc geninfo_all_blocks=1 00:13:44.292 --rc geninfo_unexecuted_blocks=1 00:13:44.292 00:13:44.292 ' 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:44.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.292 --rc genhtml_branch_coverage=1 00:13:44.292 --rc genhtml_function_coverage=1 00:13:44.292 --rc genhtml_legend=1 00:13:44.292 --rc geninfo_all_blocks=1 00:13:44.292 --rc geninfo_unexecuted_blocks=1 00:13:44.292 00:13:44.292 ' 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:44.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.292 --rc genhtml_branch_coverage=1 00:13:44.292 --rc genhtml_function_coverage=1 00:13:44.292 --rc genhtml_legend=1 00:13:44.292 --rc geninfo_all_blocks=1 00:13:44.292 --rc geninfo_unexecuted_blocks=1 00:13:44.292 00:13:44.292 ' 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
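The lt/cmp_versions steps traced above amount to a plain dotted-decimal comparison: split both version strings into components, compare them numerically left to right, and let the first difference decide. Condensed into a standalone sketch (the in-tree helper also splits on '-' and ':'; this simplification is mine):

  version_lt() {                          # true when $1 < $2, e.g. version_lt 1.15 2
      local -a ver1 ver2
      IFS=. read -ra ver1 <<< "$1"
      IFS=. read -ra ver2 <<< "$2"
      local v
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first smaller component decides
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1                            # all components equal
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the branch taken in the trace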
nvmf/common.sh@7 -- # uname -s 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:44.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=a2c85655-a2da-4761-9a97-a56267f2967e 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=8f4f9e25-56a4-4431-950a-4278672a03b7 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c92d9876-8d7a-45ca-b42a-aa7edce5f3a1 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:44.292 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:50.865 18:21:43 
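The identifiers generated above are the moving parts of the namespace-masking scenario: two namespace UUIDs, one subsystem NQN, and two distinct host NQNs. A host presents its NQN (and optionally a host ID) at connect time, and that identity is what namespace visibility is keyed on. A sketch of the host side, reusing the values from this run (the connect itself is illustrative, not part of the trace):

  hostnqn1=nqn.2016-06.io.spdk:host1       # HOSTNQN1 above
  hostnqn2=nqn.2016-06.io.spdk:host2       # HOSTNQN2 above
  # Connecting as host1; a masked subsystem would show host1 only the namespaces it is allowed to see:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$hostnqn1"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1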
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:50.865 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:50.866 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:50.866 18:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:50.866 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:50.866 Found net devices under 0000:86:00.0: cvl_0_0 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
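A note on the discovery step traced above: gather_supported_nvmf_pci_devs classifies the host's NICs by PCI vendor:device ID and, because this run sets SPDK_TEST_NVMF_NICS=e810, keeps only the Intel E810 list (both 0x8086:0x159b ports are found, at 0000:86:00.0 and 0000:86:00.1). A minimal sketch of that matching step, assuming pci_bus_cache is an associative array mapping "vendor:device" keys to PCI addresses, as the trace implies:

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810 device IDs probed in the trace
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # both ports on this box match this ID
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several ConnectX IDs probed
    pci_devs=("${e810[@]}")                      # e810 selected; rdma-only branches are skipped for tcp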
00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:50.866 Found net devices under 0000:86:00.1: cvl_0_1 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.866 18:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:50.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:13:50.866 00:13:50.866 --- 10.0.0.2 ping statistics --- 00:13:50.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.866 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:13:50.866 00:13:50.866 --- 10.0.0.1 ping statistics --- 00:13:50.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.866 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=376439 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 376439 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 376439 ']' 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.866 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:50.866 [2024-10-08 18:21:43.468053] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:13:50.866 [2024-10-08 18:21:43.468117] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.866 [2024-10-08 18:21:43.542332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.866 [2024-10-08 18:21:43.616630] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.866 [2024-10-08 18:21:43.616666] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.866 [2024-10-08 18:21:43.616673] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.866 [2024-10-08 18:21:43.616679] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.866 [2024-10-08 18:21:43.616684] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
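The nvmf_tcp_init sequence traced above turns the two E810 ports into a target/initiator pair on a single machine: cvl_0_0 moves into a private network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), and an iptables rule admits NVMe/TCP traffic on port 4420. Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

Splitting the two ports across namespaces forces traffic onto the physical wire rather than the kernel loopback path, which appears to be the point of NET_TYPE=phy runs; the two one-packet pings verify the topology before any NVMe traffic flows.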
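nvmfappstart then launches the target inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF, pid 376439 above) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A sketch of the startup pattern, assuming rpc_get_methods as the liveness probe and taking the retry bound from the max_retries=100 seen in the trace:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # succeeds only once the target is up and serving JSON-RPC on the socket
        ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done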
00:13:50.866 [2024-10-08 18:21:43.617226] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.125 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.125 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:51.125 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:51.125 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:51.125 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:51.125 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.125 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:51.384 [2024-10-08 18:21:44.490720] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.384 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:51.384 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:51.384 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:51.642 Malloc1 00:13:51.642 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:51.642 Malloc2 00:13:51.642 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:51.901 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:52.161 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.161 [2024-10-08 18:21:45.460555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.420 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:52.420 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c92d9876-8d7a-45ca-b42a-aa7edce5f3a1 -a 10.0.0.2 -s 4420 -i 4 00:13:52.420 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:52.420 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:52.420 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.420 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:52.420 
18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:54.956 [ 0]:0x1 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=501c76c694a74b6a9f1438c829ce690a 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 501c76c694a74b6a9f1438c829ce690a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.956 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:54.956 [ 0]:0x1 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=501c76c694a74b6a9f1438c829ce690a 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 501c76c694a74b6a9f1438c829ce690a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.956 18:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:54.956 [ 1]:0x2 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acb1a7bfb64a4229b3087502dcb7672e 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acb1a7bfb64a4229b3087502dcb7672e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:54.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.956 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.215 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:55.474 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:55.474 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c92d9876-8d7a-45ca-b42a-aa7edce5f3a1 -a 10.0.0.2 -s 4420 -i 4 00:13:55.474 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:55.474 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:55.474 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:55.474 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:55.474 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:55.474 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.010 [ 0]:0x2 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=acb1a7bfb64a4229b3087502dcb7672e 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acb1a7bfb64a4229b3087502dcb7672e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.010 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:58.010 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:58.010 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.010 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:58.010 [ 0]:0x1 00:13:58.010 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.010 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.010 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=501c76c694a74b6a9f1438c829ce690a 00:13:58.010 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 501c76c694a74b6a9f1438c829ce690a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.010 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:58.010 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:58.010 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.010 [ 1]:0x2 00:13:58.010 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.010 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.010 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acb1a7bfb64a4229b3087502dcb7672e 00:13:58.010 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acb1a7bfb64a4229b3087502dcb7672e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.010 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.270 18:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:58.270 [ 0]:0x2 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acb1a7bfb64a4229b3087502dcb7672e 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acb1a7bfb64a4229b3087502dcb7672e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.270 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:58.530 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:58.531 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c92d9876-8d7a-45ca-b42a-aa7edce5f3a1 -a 10.0.0.2 -s 4420 -i 4 00:13:58.789 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:58.789 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:58.789 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:58.789 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:58.789 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:58.789 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:00.694 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:00.694 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:00.694 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:00.694 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:00.694 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:00.695 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:00.695 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:00.695 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:00.954 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:00.954 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:00.954 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:00.954 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.954 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:00.954 [ 0]:0x1 00:14:00.954 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:00.954 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.954 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=501c76c694a74b6a9f1438c829ce690a 00:14:00.954 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 501c76c694a74b6a9f1438c829ce690a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.954 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:00.954 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.954 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:00.954 [ 1]:0x2 00:14:00.954 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:00.954 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.213 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acb1a7bfb64a4229b3087502dcb7672e 00:14:01.213 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acb1a7bfb64a4229b3087502dcb7672e != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.213 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:01.213 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:01.213 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:01.213 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:01.213 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:01.213 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.213 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:01.213 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.213 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:01.213 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:01.213 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.213 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:01.213 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.471 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:01.471 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.471 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:01.471 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:01.471 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:01.471 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:01.471 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:01.471 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.471 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:01.471 [ 0]:0x2 00:14:01.471 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:01.471 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.471 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acb1a7bfb64a4229b3087502dcb7672e 00:14:01.471 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acb1a7bfb64a4229b3087502dcb7672e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.471 18:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:01.472 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:01.472 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:01.472 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.472 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.472 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.472 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.472 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.472 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.472 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.472 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:01.472 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:01.472 [2024-10-08 18:21:54.783553] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:01.472 request: 00:14:01.472 { 00:14:01.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.472 "nsid": 2, 00:14:01.472 "host": "nqn.2016-06.io.spdk:host1", 00:14:01.472 "method": "nvmf_ns_remove_host", 00:14:01.472 "req_id": 1 00:14:01.472 } 00:14:01.472 Got JSON-RPC error response 00:14:01.472 response: 00:14:01.472 { 00:14:01.472 "code": -32602, 00:14:01.472 "message": "Invalid parameters" 00:14:01.472 } 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:01.731 18:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:01.731 [ 0]:0x2 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acb1a7bfb64a4229b3087502dcb7672e 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acb1a7bfb64a4229b3087502dcb7672e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:01.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=378443 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 378443 
/var/tmp/host.sock 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 378443 ']' 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:01.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:01.731 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:01.731 [2024-10-08 18:21:55.019118] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:14:01.731 [2024-10-08 18:21:55.019165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid378443 ] 00:14:01.991 [2024-10-08 18:21:55.088272] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.991 [2024-10-08 18:21:55.160509] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.558 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:02.558 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:02.558 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.816 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:03.076 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid a2c85655-a2da-4761-9a97-a56267f2967e 00:14:03.076 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:03.076 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A2C85655A2DA47619A97A56267F2967E -i 00:14:03.335 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 8f4f9e25-56a4-4431-950a-4278672a03b7 00:14:03.335 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:03.335 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 8F4F9E2556A44431950A4278672A03B7 -i 00:14:03.335 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:03.593 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:03.852 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:03.852 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:04.112 nvme0n1 00:14:04.112 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:04.112 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:04.371 nvme1n2 00:14:04.371 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:04.371 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:04.371 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:04.371 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:04.371 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:04.630 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:04.630 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:04.630 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:04.630 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:04.890 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ a2c85655-a2da-4761-9a97-a56267f2967e == \a\2\c\8\5\6\5\5\-\a\2\d\a\-\4\7\6\1\-\9\a\9\7\-\a\5\6\2\6\7\f\2\9\6\7\e ]] 00:14:04.890 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:04.890 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:04.890 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:05.151 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
8f4f9e25-56a4-4431-950a-4278672a03b7 == \8\f\4\f\9\e\2\5\-\5\6\a\4\-\4\4\3\1\-\9\5\0\a\-\4\2\7\8\6\7\2\a\0\3\b\7 ]] 00:14:05.151 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 378443 00:14:05.151 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 378443 ']' 00:14:05.151 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 378443 00:14:05.151 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:05.151 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.151 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 378443 00:14:05.151 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:05.151 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:05.151 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 378443' 00:14:05.151 killing process with pid 378443 00:14:05.151 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 378443 00:14:05.151 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 378443 00:14:05.411 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:05.670 rmmod nvme_tcp 00:14:05.670 rmmod nvme_fabrics 00:14:05.670 rmmod nvme_keyring 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 376439 ']' 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 376439 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 376439 ']' 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 376439 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 376439 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 376439' 00:14:05.670 killing process with pid 376439 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 376439 00:14:05.670 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 376439 00:14:05.929 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:05.929 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:05.929 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:05.929 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:05.929 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:14:05.929 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:05.929 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:14:05.929 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:05.929 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:05.929 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.929 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.929 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:08.465 00:14:08.465 real 0m24.051s 00:14:08.465 user 0m25.998s 00:14:08.465 sys 0m6.835s 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:08.465 ************************************ 00:14:08.465 END TEST nvmf_ns_masking 00:14:08.465 ************************************ 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
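The nvmf_ns_masking test that just ended exercises per-host namespace visibility: each namespace of cnode1 is granted to exactly one host NQN, a host-side SPDK app attaches one controller per host NQN, and the visible bdev names and UUIDs are checked against expectations. Condensed to the RPC sequence actually traced above (rpc.py is scripts/rpc.py; NQNs and addresses as in the run):

    # target side: nsid 1 visible only to host1, nsid 2 only to host2
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
    # host side (bdev_nvme app on /var/tmp/host.sock): one controller per host NQN
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
    # verification: exactly nvme0n1 and nvme1n2 appear, with the expected UUIDs
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'

Everything is then torn down with nvmf_delete_subsystem and nvmftestfini, as traced above.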
00:14:08.465 ************************************ 00:14:08.465 START TEST nvmf_nvme_cli 00:14:08.465 ************************************ 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:08.465 * Looking for test storage... 00:14:08.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:08.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.465 --rc genhtml_branch_coverage=1 00:14:08.465 --rc genhtml_function_coverage=1 00:14:08.465 --rc genhtml_legend=1 00:14:08.465 --rc geninfo_all_blocks=1 00:14:08.465 --rc geninfo_unexecuted_blocks=1 00:14:08.465 00:14:08.465 ' 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:08.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.465 --rc genhtml_branch_coverage=1 00:14:08.465 --rc genhtml_function_coverage=1 00:14:08.465 --rc genhtml_legend=1 00:14:08.465 --rc geninfo_all_blocks=1 00:14:08.465 --rc geninfo_unexecuted_blocks=1 00:14:08.465 00:14:08.465 ' 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:08.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.465 --rc genhtml_branch_coverage=1 00:14:08.465 --rc genhtml_function_coverage=1 00:14:08.465 --rc genhtml_legend=1 00:14:08.465 --rc geninfo_all_blocks=1 00:14:08.465 --rc geninfo_unexecuted_blocks=1 00:14:08.465 00:14:08.465 ' 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:08.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.465 --rc genhtml_branch_coverage=1 00:14:08.465 --rc genhtml_function_coverage=1 00:14:08.465 --rc genhtml_legend=1 00:14:08.465 --rc geninfo_all_blocks=1 00:14:08.465 --rc geninfo_unexecuted_blocks=1 00:14:08.465 00:14:08.465 ' 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
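The lt/cmp_versions trace above is scripts/common.sh deciding which lcov option spellings to export: the detected lcov version (1.15 here) is compared against 2, and since 1.15 < 2 the pre-2.0 --rc lcov_* names are used. The helper splits versions on '.', '-' and ':' and walks the fields numerically; a minimal sketch of the same idea (simplified: the real script also validates each field through its decimal() helper):

    lt() {   # lt A B -> succeeds when version A sorts strictly before version B
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # first field that differs decides
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "use pre-2.0 lcov option names"   # the branch taken above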
00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.465 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:08.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:08.466 18:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:08.466 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.036 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:15.037 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:15.037 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.037 
18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:15.037 Found net devices under 0000:86:00.0: cvl_0_0 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:15.037 Found net devices under 0000:86:00.1: cvl_0_1 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:15.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:15.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:14:15.037 00:14:15.037 --- 10.0.0.2 ping statistics --- 00:14:15.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.037 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:15.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:15.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:14:15.037 00:14:15.037 --- 10.0.0.1 ping statistics --- 00:14:15.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.037 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=382684 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 382684 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 382684 ']' 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.037 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.037 [2024-10-08 18:22:07.588146] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
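Network plumbing for this test, as performed by nvmftestinit/nvmf_tcp_init above: the two E810 ports (0000:86:00.0 and .1, netdevs cvl_0_0 and cvl_0_1) are split between a private network namespace for the target and the root namespace for the initiator, both ends are addressed, the NVMe/TCP port is opened in iptables, and reachability is verified with a ping in each direction before the target app is started inside the namespace. Condensed from the steps traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
    # the target itself then runs inside the namespace, on 4 cores:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF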
00:14:15.038 [2024-10-08 18:22:07.588191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.038 [2024-10-08 18:22:07.661566] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:15.038 [2024-10-08 18:22:07.740046] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.038 [2024-10-08 18:22:07.740085] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.038 [2024-10-08 18:22:07.740092] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.038 [2024-10-08 18:22:07.740098] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.038 [2024-10-08 18:22:07.740103] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.038 [2024-10-08 18:22:07.741543] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.038 [2024-10-08 18:22:07.741582] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.038 [2024-10-08 18:22:07.741690] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.038 [2024-10-08 18:22:07.741691] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.296 [2024-10-08 18:22:08.477780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.296 Malloc0 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
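Target provisioning for the nvme_cli test, here and continuing just below: a TCP transport, two 64 MiB malloc bdevs with a 512-byte block size, and one allow-any-host subsystem (serial SPDKISFASTANDAWESOME, model SPDK_Controller1) exposing both bdevs as namespaces on 10.0.0.2:4420, plus a discovery listener on the same port. The traced rpc_cmd calls amount to:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
        -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420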
00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.296 Malloc1 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.296 [2024-10-08 18:22:08.554970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.296 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:15.555 00:14:15.555 Discovery Log Number of Records 2, Generation counter 2 00:14:15.555 =====Discovery Log Entry 0====== 00:14:15.555 trtype: tcp 00:14:15.555 adrfam: ipv4 00:14:15.555 subtype: current discovery subsystem 00:14:15.555 treq: not required 00:14:15.555 portid: 0 00:14:15.555 trsvcid: 4420 00:14:15.555 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:15.555 traddr: 10.0.0.2 00:14:15.555 eflags: explicit discovery connections, duplicate discovery information 00:14:15.555 sectype: none 00:14:15.555 =====Discovery Log Entry 1====== 00:14:15.555 trtype: tcp 00:14:15.555 adrfam: ipv4 00:14:15.555 subtype: nvme subsystem 00:14:15.555 treq: not required 00:14:15.555 portid: 0 00:14:15.555 trsvcid: 4420 00:14:15.555 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:15.555 traddr: 10.0.0.2 00:14:15.555 eflags: none 00:14:15.555 sectype: none 00:14:15.555 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:15.555 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:15.555 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:15.555 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:15.555 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:15.555 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:15.555 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:15.555 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:15.555 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:15.555 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:15.555 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:16.933 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:16.933 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:16.933 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.933 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:16.933 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:16.933 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:18.839 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:18.839 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:18.839 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:18.839 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:18.839 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:18.839 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:18.839 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:18.839 18:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:18.839 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:18.839 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:18.839 /dev/nvme0n2 ]] 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:18.839 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:19.098 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:19.098 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:19.098 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:19.098 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:19.098 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:19.098 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:19.098 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:19.098 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:19.098 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:19.098 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:19.098 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:19.098 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:19.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.357 18:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:19.357 rmmod nvme_tcp 00:14:19.357 rmmod nvme_fabrics 00:14:19.357 rmmod nvme_keyring 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 382684 ']' 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 382684 00:14:19.357 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 382684 ']' 00:14:19.358 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 382684 00:14:19.358 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:19.358 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.358 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 382684 
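Host-side, the test above is the whole nvme-cli round trip: discover must return the two log entries shown (the discovery subsystem plus cnode1), connect must surface two block devices whose SERIAL matches the target's, and disconnect must remove them again; the serial count is what waitforserial/waitforserial_disconnect poll for. A condensed sketch with the host NQN/ID generated earlier in the run:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
    HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
    nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN --hostid=$HOSTID
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=$HOSTNQN --hostid=$HOSTID
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2: nvme0n1, nvme0n2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # "disconnected 1 controller(s)"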
00:14:19.617 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:19.617 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:19.617 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 382684' 00:14:19.617 killing process with pid 382684 00:14:19.617 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 382684 00:14:19.617 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 382684 00:14:19.878 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:19.878 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:19.878 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:19.878 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:19.878 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:14:19.878 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:19.878 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:14:19.878 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:19.878 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:19.878 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.878 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.878 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.785 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:21.785 00:14:21.785 real 0m13.719s 00:14:21.785 user 0m22.538s 00:14:21.785 sys 0m5.180s 00:14:21.785 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:21.785 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.785 ************************************ 00:14:21.785 END TEST nvmf_nvme_cli 00:14:21.785 ************************************ 00:14:21.785 18:22:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:21.786 18:22:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:21.786 18:22:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:21.786 18:22:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:21.786 18:22:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:21.786 ************************************ 00:14:21.786 START TEST nvmf_vfio_user 00:14:21.786 ************************************ 00:14:21.786 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:22.046 * Looking for test storage... 00:14:22.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:22.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.046 --rc genhtml_branch_coverage=1 00:14:22.046 --rc genhtml_function_coverage=1 00:14:22.046 --rc genhtml_legend=1 00:14:22.046 --rc geninfo_all_blocks=1 00:14:22.046 --rc geninfo_unexecuted_blocks=1 00:14:22.046 00:14:22.046 ' 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:22.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.046 --rc genhtml_branch_coverage=1 00:14:22.046 --rc genhtml_function_coverage=1 00:14:22.046 --rc genhtml_legend=1 00:14:22.046 --rc geninfo_all_blocks=1 00:14:22.046 --rc geninfo_unexecuted_blocks=1 00:14:22.046 00:14:22.046 ' 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:22.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.046 --rc genhtml_branch_coverage=1 00:14:22.046 --rc genhtml_function_coverage=1 00:14:22.046 --rc genhtml_legend=1 00:14:22.046 --rc geninfo_all_blocks=1 00:14:22.046 --rc geninfo_unexecuted_blocks=1 00:14:22.046 00:14:22.046 ' 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:22.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.046 --rc genhtml_branch_coverage=1 00:14:22.046 --rc genhtml_function_coverage=1 00:14:22.046 --rc genhtml_legend=1 00:14:22.046 --rc geninfo_all_blocks=1 00:14:22.046 --rc geninfo_unexecuted_blocks=1 00:14:22.046 00:14:22.046 ' 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.046 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:22.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
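A side note on the "[: : integer expression expected" complaint above: line 33 of nvmf/common.sh evaluated '[' '' -eq 1 ']' because the variable it tests expanded to empty, and '-eq' requires an integer on both sides. The test simply returns false and the run carries on, so the message is noise rather than a failure. A minimal sketch of the usual guard, with a placeholder variable name since the trace does not show which variable was empty:

    #!/usr/bin/env bash
    # Default the value before an arithmetic test so '[' never sees an
    # empty operand. SOME_NUMERIC_FLAG is a stand-in name, not the
    # actual variable tested at common.sh line 33.
    if [ "${SOME_NUMERIC_FLAG:-0}" -eq 1 ]; then
        echo "flag is set"
    fi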
00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=384136 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 384136' 00:14:22.047 Process pid: 384136 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 384136 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 384136 ']' 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:22.047 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:22.306 [2024-10-08 18:22:15.367204] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:14:22.306 [2024-10-08 18:22:15.367253] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.306 [2024-10-08 18:22:15.436457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:22.306 [2024-10-08 18:22:15.508173] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.306 [2024-10-08 18:22:15.508216] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:22.306 [2024-10-08 18:22:15.508223] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.306 [2024-10-08 18:22:15.508229] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.306 [2024-10-08 18:22:15.508234] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.306 [2024-10-08 18:22:15.509831] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.306 [2024-10-08 18:22:15.509923] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.306 [2024-10-08 18:22:15.510038] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.306 [2024-10-08 18:22:15.510039] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:22.874 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.874 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:22.874 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:24.335 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:24.335 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:24.335 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:24.335 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:24.335 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:24.335 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:24.335 Malloc1 00:14:24.335 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:24.594 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:24.852 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:25.110 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:25.110 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:25.110 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:25.110 Malloc2 00:14:25.110 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
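The setup that begins above and completes just below reduces to one RPC sequence per device. A condensed, hedged replay of it (the rpc.py path is shortened from the full Jenkins workspace path; the sizes, NQNs, and socket directories are taken verbatim from the trace: a 64 MiB malloc bdev with 512-byte blocks behind each subsystem, listening on a vfio-user socket directory):

    rpc=scripts/rpc.py                       # stands in for the full workspace path
    $rpc nvmf_create_transport -t VFIOUSER   # @64 above
    for i in 1 2; do                         # NUM_DEVICES=2, per the @68 loop
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done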
00:14:25.368 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:25.625 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:25.886 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:25.886 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:25.886 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:25.886 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:25.886 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:25.886 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:25.886 [2024-10-08 18:22:18.986469] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:14:25.886 [2024-10-08 18:22:18.986494] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384698 ] 00:14:25.886 [2024-10-08 18:22:19.011664] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:25.886 [2024-10-08 18:22:19.023186] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:25.886 [2024-10-08 18:22:19.023209] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f56fe2c9000 00:14:25.886 [2024-10-08 18:22:19.024190] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:25.886 [2024-10-08 18:22:19.025183] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:25.886 [2024-10-08 18:22:19.026190] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:25.886 [2024-10-08 18:22:19.027197] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:25.886 [2024-10-08 18:22:19.028202] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:25.886 [2024-10-08 18:22:19.029216] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:25.886 [2024-10-08 18:22:19.030213] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:25.886 [2024-10-08 18:22:19.031222] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:25.886 [2024-10-08 18:22:19.032235] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:25.886 [2024-10-08 18:22:19.032245] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f56fe2be000 00:14:25.886 [2024-10-08 18:22:19.033304] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:25.886 [2024-10-08 18:22:19.049654] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:25.886 [2024-10-08 18:22:19.049680] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:25.886 [2024-10-08 18:22:19.052351] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:25.886 [2024-10-08 18:22:19.052389] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:25.886 [2024-10-08 18:22:19.052458] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:25.886 [2024-10-08 18:22:19.052476] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:25.886 [2024-10-08 18:22:19.052482] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:25.886 [2024-10-08 18:22:19.053353] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:25.886 [2024-10-08 18:22:19.053363] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:25.886 [2024-10-08 18:22:19.053369] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:25.886 [2024-10-08 18:22:19.054358] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:25.886 [2024-10-08 18:22:19.054366] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:25.886 [2024-10-08 18:22:19.054373] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:25.886 [2024-10-08 18:22:19.055363] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:25.886 [2024-10-08 18:22:19.055370] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:25.886 [2024-10-08 18:22:19.056364] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:25.886 [2024-10-08 
18:22:19.056371] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:25.886 [2024-10-08 18:22:19.056379] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:25.886 [2024-10-08 18:22:19.056385] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:25.886 [2024-10-08 18:22:19.056490] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:25.886 [2024-10-08 18:22:19.056495] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:25.886 [2024-10-08 18:22:19.056499] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:25.886 [2024-10-08 18:22:19.057378] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:25.886 [2024-10-08 18:22:19.058383] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:25.886 [2024-10-08 18:22:19.059388] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:25.886 [2024-10-08 18:22:19.060382] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:25.887 [2024-10-08 18:22:19.060457] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:25.887 [2024-10-08 18:22:19.061395] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:25.887 [2024-10-08 18:22:19.061402] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:25.887 [2024-10-08 18:22:19.061407] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061423] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:25.887 [2024-10-08 18:22:19.061431] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061445] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:25.887 [2024-10-08 18:22:19.061450] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:25.887 [2024-10-08 18:22:19.061453] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:25.887 [2024-10-08 18:22:19.061466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:25.887 [2024-10-08 18:22:19.061514] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:25.887 [2024-10-08 18:22:19.061523] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:25.887 [2024-10-08 18:22:19.061528] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:25.887 [2024-10-08 18:22:19.061533] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:25.887 [2024-10-08 18:22:19.061537] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:25.887 [2024-10-08 18:22:19.061542] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:25.887 [2024-10-08 18:22:19.061546] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:25.887 [2024-10-08 18:22:19.061551] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061558] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061568] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:25.887 [2024-10-08 18:22:19.061582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:25.887 [2024-10-08 18:22:19.061592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.887 [2024-10-08 18:22:19.061602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.887 [2024-10-08 18:22:19.061609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.887 [2024-10-08 18:22:19.061617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.887 [2024-10-08 18:22:19.061621] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061630] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061638] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:25.887 [2024-10-08 18:22:19.061647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:25.887 [2024-10-08 18:22:19.061653] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:25.887 [2024-10-08 18:22:19.061658] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061665] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061671] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:25.887 [2024-10-08 18:22:19.061688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:25.887 [2024-10-08 18:22:19.061736] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061743] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061750] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:25.887 [2024-10-08 18:22:19.061754] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:25.887 [2024-10-08 18:22:19.061757] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:25.887 [2024-10-08 18:22:19.061763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:25.887 [2024-10-08 18:22:19.061773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:25.887 [2024-10-08 18:22:19.061783] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:25.887 [2024-10-08 18:22:19.061791] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061798] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061804] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:25.887 [2024-10-08 18:22:19.061808] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:25.887 [2024-10-08 18:22:19.061813] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:25.887 [2024-10-08 18:22:19.061818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:25.887 [2024-10-08 18:22:19.061843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:25.887 [2024-10-08 18:22:19.061853] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061859] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061865] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:25.887 [2024-10-08 18:22:19.061870] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:25.887 [2024-10-08 18:22:19.061873] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:25.887 [2024-10-08 18:22:19.061878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:25.887 [2024-10-08 18:22:19.061888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:25.887 [2024-10-08 18:22:19.061897] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061903] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061910] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061915] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061920] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061925] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061929] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:25.887 [2024-10-08 18:22:19.061934] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:25.887 [2024-10-08 18:22:19.061938] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:25.887 [2024-10-08 18:22:19.061955] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:25.887 [2024-10-08 18:22:19.061964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:25.887 [2024-10-08 18:22:19.061975] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:25.887 [2024-10-08 18:22:19.061983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:25.887 [2024-10-08 18:22:19.061993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:25.887 [2024-10-08 18:22:19.062001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:25.887 [2024-10-08 18:22:19.062011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:25.887 [2024-10-08 18:22:19.062023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:25.887 [2024-10-08 18:22:19.062034] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:25.887 [2024-10-08 18:22:19.062039] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:25.887 [2024-10-08 18:22:19.062042] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:25.887 [2024-10-08 18:22:19.062045] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:25.887 [2024-10-08 18:22:19.062048] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:25.887 [2024-10-08 18:22:19.062054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:25.887 [2024-10-08 18:22:19.062060] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:25.887 [2024-10-08 18:22:19.062064] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:25.887 [2024-10-08 18:22:19.062067] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:25.887 [2024-10-08 18:22:19.062073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:25.887 [2024-10-08 18:22:19.062080] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:25.887 [2024-10-08 18:22:19.062083] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:25.887 [2024-10-08 18:22:19.062087] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:25.888 [2024-10-08 18:22:19.062092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:25.888 [2024-10-08 18:22:19.062099] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:25.888 [2024-10-08 18:22:19.062103] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:25.888 [2024-10-08 18:22:19.062106] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:25.888 [2024-10-08 18:22:19.062111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:25.888 [2024-10-08 18:22:19.062117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:25.888 [2024-10-08 18:22:19.062127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:25.888 [2024-10-08 18:22:19.062136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:25.888 [2024-10-08 18:22:19.062142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:25.888 ===================================================== 00:14:25.888 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:25.888 ===================================================== 00:14:25.888 Controller Capabilities/Features 00:14:25.888 ================================ 00:14:25.888 Vendor ID: 4e58 00:14:25.888 Subsystem Vendor ID: 4e58 00:14:25.888 Serial Number: SPDK1 00:14:25.888 Model Number: SPDK bdev Controller 00:14:25.888 Firmware Version: 25.01 00:14:25.888 Recommended Arb Burst: 6 00:14:25.888 IEEE OUI Identifier: 8d 6b 50 00:14:25.888 Multi-path I/O 00:14:25.888 May have multiple subsystem ports: Yes 00:14:25.888 May have multiple controllers: Yes 00:14:25.888 Associated with SR-IOV VF: No 00:14:25.888 Max Data Transfer Size: 131072 00:14:25.888 Max Number of Namespaces: 32 00:14:25.888 Max Number of I/O Queues: 127 00:14:25.888 NVMe Specification Version (VS): 1.3 00:14:25.888 NVMe Specification Version (Identify): 1.3 00:14:25.888 Maximum Queue Entries: 256 00:14:25.888 Contiguous Queues Required: Yes 00:14:25.888 Arbitration Mechanisms Supported 00:14:25.888 Weighted Round Robin: Not Supported 00:14:25.888 Vendor Specific: Not Supported 00:14:25.888 Reset Timeout: 15000 ms 00:14:25.888 Doorbell Stride: 4 bytes 00:14:25.888 NVM Subsystem Reset: Not Supported 00:14:25.888 Command Sets Supported 00:14:25.888 NVM Command Set: Supported 00:14:25.888 Boot Partition: Not Supported 00:14:25.888 Memory Page Size Minimum: 4096 bytes 00:14:25.888 Memory Page Size Maximum: 4096 bytes 00:14:25.888 Persistent Memory Region: Not Supported 00:14:25.888 Optional Asynchronous Events Supported 00:14:25.888 Namespace Attribute Notices: Supported 00:14:25.888 Firmware Activation Notices: Not Supported 00:14:25.888 ANA Change Notices: Not Supported 00:14:25.888 PLE Aggregate Log Change Notices: Not Supported 00:14:25.888 LBA Status Info Alert Notices: Not Supported 00:14:25.888 EGE Aggregate Log Change Notices: Not Supported 00:14:25.888 Normal NVM Subsystem Shutdown event: Not Supported 00:14:25.888 Zone Descriptor Change Notices: Not Supported 00:14:25.888 Discovery Log Change Notices: Not Supported 00:14:25.888 Controller Attributes 00:14:25.888 128-bit Host Identifier: Supported 00:14:25.888 Non-Operational Permissive Mode: Not Supported 00:14:25.888 NVM Sets: Not Supported 00:14:25.888 Read Recovery Levels: Not Supported 00:14:25.888 Endurance Groups: Not Supported 00:14:25.888 Predictable Latency Mode: Not Supported 00:14:25.888 Traffic Based Keep ALive: Not Supported 00:14:25.888 Namespace Granularity: Not Supported 00:14:25.888 SQ Associations: Not Supported 00:14:25.888 UUID List: Not Supported 00:14:25.888 Multi-Domain Subsystem: Not Supported 00:14:25.888 Fixed Capacity Management: Not Supported 00:14:25.888 Variable Capacity Management: Not Supported 00:14:25.888 Delete Endurance Group: Not Supported 00:14:25.888 Delete NVM Set: Not Supported 00:14:25.888 Extended LBA Formats Supported: Not Supported 00:14:25.888 Flexible Data Placement Supported: Not Supported 00:14:25.888 00:14:25.888 Controller Memory Buffer Support 00:14:25.888 ================================ 00:14:25.888 Supported: No 00:14:25.888 00:14:25.888 Persistent Memory Region Support 00:14:25.888 
================================ 00:14:25.888 Supported: No 00:14:25.888 00:14:25.888 Admin Command Set Attributes 00:14:25.888 ============================ 00:14:25.888 Security Send/Receive: Not Supported 00:14:25.888 Format NVM: Not Supported 00:14:25.888 Firmware Activate/Download: Not Supported 00:14:25.888 Namespace Management: Not Supported 00:14:25.888 Device Self-Test: Not Supported 00:14:25.888 Directives: Not Supported 00:14:25.888 NVMe-MI: Not Supported 00:14:25.888 Virtualization Management: Not Supported 00:14:25.888 Doorbell Buffer Config: Not Supported 00:14:25.888 Get LBA Status Capability: Not Supported 00:14:25.888 Command & Feature Lockdown Capability: Not Supported 00:14:25.888 Abort Command Limit: 4 00:14:25.888 Async Event Request Limit: 4 00:14:25.888 Number of Firmware Slots: N/A 00:14:25.888 Firmware Slot 1 Read-Only: N/A 00:14:25.888 Firmware Activation Without Reset: N/A 00:14:25.888 Multiple Update Detection Support: N/A 00:14:25.888 Firmware Update Granularity: No Information Provided 00:14:25.888 Per-Namespace SMART Log: No 00:14:25.888 Asymmetric Namespace Access Log Page: Not Supported 00:14:25.888 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:25.888 Command Effects Log Page: Supported 00:14:25.888 Get Log Page Extended Data: Supported 00:14:25.888 Telemetry Log Pages: Not Supported 00:14:25.888 Persistent Event Log Pages: Not Supported 00:14:25.888 Supported Log Pages Log Page: May Support 00:14:25.888 Commands Supported & Effects Log Page: Not Supported 00:14:25.888 Feature Identifiers & Effects Log Page:May Support 00:14:25.888 NVMe-MI Commands & Effects Log Page: May Support 00:14:25.888 Data Area 4 for Telemetry Log: Not Supported 00:14:25.888 Error Log Page Entries Supported: 128 00:14:25.888 Keep Alive: Supported 00:14:25.888 Keep Alive Granularity: 10000 ms 00:14:25.888 00:14:25.888 NVM Command Set Attributes 00:14:25.888 ========================== 00:14:25.888 Submission Queue Entry Size 00:14:25.888 Max: 64 00:14:25.888 Min: 64 00:14:25.888 Completion Queue Entry Size 00:14:25.888 Max: 16 00:14:25.888 Min: 16 00:14:25.888 Number of Namespaces: 32 00:14:25.888 Compare Command: Supported 00:14:25.888 Write Uncorrectable Command: Not Supported 00:14:25.888 Dataset Management Command: Supported 00:14:25.888 Write Zeroes Command: Supported 00:14:25.888 Set Features Save Field: Not Supported 00:14:25.888 Reservations: Not Supported 00:14:25.888 Timestamp: Not Supported 00:14:25.888 Copy: Supported 00:14:25.888 Volatile Write Cache: Present 00:14:25.888 Atomic Write Unit (Normal): 1 00:14:25.888 Atomic Write Unit (PFail): 1 00:14:25.888 Atomic Compare & Write Unit: 1 00:14:25.888 Fused Compare & Write: Supported 00:14:25.888 Scatter-Gather List 00:14:25.888 SGL Command Set: Supported (Dword aligned) 00:14:25.888 SGL Keyed: Not Supported 00:14:25.888 SGL Bit Bucket Descriptor: Not Supported 00:14:25.888 SGL Metadata Pointer: Not Supported 00:14:25.888 Oversized SGL: Not Supported 00:14:25.888 SGL Metadata Address: Not Supported 00:14:25.888 SGL Offset: Not Supported 00:14:25.888 Transport SGL Data Block: Not Supported 00:14:25.888 Replay Protected Memory Block: Not Supported 00:14:25.888 00:14:25.888 Firmware Slot Information 00:14:25.888 ========================= 00:14:25.888 Active slot: 1 00:14:25.888 Slot 1 Firmware Revision: 25.01 00:14:25.888 00:14:25.888 00:14:25.888 Commands Supported and Effects 00:14:25.888 ============================== 00:14:25.888 Admin Commands 00:14:25.888 -------------- 00:14:25.888 Get Log Page (02h): Supported 
00:14:25.888 Identify (06h): Supported 00:14:25.888 Abort (08h): Supported 00:14:25.888 Set Features (09h): Supported 00:14:25.888 Get Features (0Ah): Supported 00:14:25.888 Asynchronous Event Request (0Ch): Supported 00:14:25.888 Keep Alive (18h): Supported 00:14:25.888 I/O Commands 00:14:25.888 ------------ 00:14:25.888 Flush (00h): Supported LBA-Change 00:14:25.888 Write (01h): Supported LBA-Change 00:14:25.888 Read (02h): Supported 00:14:25.888 Compare (05h): Supported 00:14:25.888 Write Zeroes (08h): Supported LBA-Change 00:14:25.888 Dataset Management (09h): Supported LBA-Change 00:14:25.888 Copy (19h): Supported LBA-Change 00:14:25.888 00:14:25.888 Error Log 00:14:25.888 ========= 00:14:25.889 00:14:25.889 Arbitration 00:14:25.889 =========== 00:14:25.889 Arbitration Burst: 1 00:14:25.889 00:14:25.889 Power Management 00:14:25.889 ================ 00:14:25.889 Number of Power States: 1 00:14:25.889 Current Power State: Power State #0 00:14:25.889 Power State #0: 00:14:25.889 Max Power: 0.00 W 00:14:25.889 Non-Operational State: Operational 00:14:25.889 Entry Latency: Not Reported 00:14:25.889 Exit Latency: Not Reported 00:14:25.889 Relative Read Throughput: 0 00:14:25.889 Relative Read Latency: 0 00:14:25.889 Relative Write Throughput: 0 00:14:25.889 Relative Write Latency: 0 00:14:25.889 Idle Power: Not Reported 00:14:25.889 Active Power: Not Reported 00:14:25.889 Non-Operational Permissive Mode: Not Supported 00:14:25.889 00:14:25.889 Health Information 00:14:25.889 ================== 00:14:25.889 Critical Warnings: 00:14:25.889 Available Spare Space: OK 00:14:25.889 Temperature: OK 00:14:25.889 Device Reliability: OK 00:14:25.889 Read Only: No 00:14:25.889 Volatile Memory Backup: OK 00:14:25.889 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:25.889 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:25.889 Available Spare: 0% 00:14:25.889 Available Spare Threshold: 0% 00:14:25.889 Life Percentage Used: 0% 00:14:25.889 Data Units Read: 0 00:14:25.889 Data Units Written: 0 00:14:25.889 Host Read Commands: 0 00:14:25.889 Host Write Commands: 0 00:14:25.889 Controller Busy Time: 0 minutes 00:14:25.889 Power Cycles: 0 00:14:25.889 Power On Hours: 0 hours 00:14:25.889 Unsafe Shutdowns: 0 00:14:25.889 Unrecoverable Media Errors: 0 00:14:25.889 Lifetime Error Log Entries: 0 00:14:25.889 Warning Temperature Time: 0 minutes 00:14:25.889 Critical Temperature Time: 0 minutes 00:14:25.889 00:14:25.889 Number of Queues 00:14:25.889 ================ 00:14:25.889 Number of I/O Submission Queues: 127 00:14:25.889 Number of I/O Completion Queues: 127 00:14:25.889 00:14:25.889 Active Namespaces 00:14:25.889 ================= 00:14:25.889 Namespace ID:1 00:14:25.889 Error Recovery Timeout: Unlimited 00:14:25.889 Command Set Identifier: NVM (00h) 00:14:25.889 Deallocate: Supported 00:14:25.889 Deallocated/Unwritten Error: Not Supported 00:14:25.889 Deallocated Read Value: Unknown 00:14:25.889 Deallocate in Write Zeroes: Not Supported 00:14:25.889 Deallocated Guard Field: 0xFFFF 00:14:25.889 Flush: Supported 00:14:25.889 Reservation: Supported 00:14:25.889 Namespace Sharing Capabilities: Multiple Controllers 00:14:25.889 Size (in LBAs): 131072 (0GiB) 00:14:25.889 Capacity (in LBAs): 131072 (0GiB) 00:14:25.889 Utilization (in LBAs): 131072 (0GiB) 00:14:25.889 NGUID: 4B1764F0009A4CCB8EA76DE7FD7006AF 00:14:25.889 UUID: 4b1764f0-009a-4ccb-8ea7-6de7fd7006af 00:14:25.889 Thin Provisioning: Not Supported 00:14:25.889 Per-NS Atomic Units: Yes 00:14:25.889 Atomic Boundary Size (Normal): 0 00:14:25.889 Atomic Boundary Size (PFail): 0 00:14:25.889 Atomic Boundary Offset: 0 00:14:25.889 Maximum Single Source Range Length: 65535 00:14:25.889 Maximum Copy Length: 65535 00:14:25.889 Maximum Source Range Count: 1 00:14:25.889 NGUID/EUI64 Never Reused: No 00:14:25.889 Namespace Write Protected: No 00:14:25.889 Number of LBA Formats: 1 00:14:25.889 Current LBA Format: LBA Format #00 00:14:25.889 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:25.889 00:14:25.889 [2024-10-08 18:22:19.062224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:25.889 [2024-10-08 18:22:19.062234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:25.889 [2024-10-08 18:22:19.062259] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:25.889 [2024-10-08 18:22:19.062269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.889 [2024-10-08 18:22:19.062275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.889 [2024-10-08 18:22:19.062282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.889 [2024-10-08 18:22:19.062287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.889 [2024-10-08 18:22:19.065382] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:25.889 [2024-10-08 18:22:19.065393] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:25.889 [2024-10-08 18:22:19.065416] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:25.889 [2024-10-08 18:22:19.065463] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:25.889 [2024-10-08 18:22:19.065469] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:25.889 [2024-10-08 18:22:19.066423] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:25.889 [2024-10-08 18:22:19.066433] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:25.889 [2024-10-08 18:22:19.066481] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:25.889 [2024-10-08 18:22:19.067452] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:25.889 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:26.148 [2024-10-08 18:22:19.268306] vfio_user.c:2836:enable_ctrlr: *NOTICE*:
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:31.419 Initializing NVMe Controllers 00:14:31.419 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:31.419 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:31.419 Initialization complete. Launching workers. 00:14:31.419 ======================================================== 00:14:31.419 Latency(us) 00:14:31.419 Device Information : IOPS MiB/s Average min max 00:14:31.419 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39950.40 156.06 3203.82 943.77 7229.64 00:14:31.419 ======================================================== 00:14:31.419 Total : 39950.40 156.06 3203.82 943.77 7229.64 00:14:31.419 00:14:31.419 [2024-10-08 18:22:24.289231] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:31.419 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:31.419 [2024-10-08 18:22:24.514303] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:36.691 Initializing NVMe Controllers 00:14:36.691 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:36.691 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:36.691 Initialization complete. Launching workers. 00:14:36.691 ======================================================== 00:14:36.691 Latency(us) 00:14:36.691 Device Information : IOPS MiB/s Average min max 00:14:36.691 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.92 62.70 7979.96 7774.21 8072.56 00:14:36.691 ======================================================== 00:14:36.691 Total : 16050.92 62.70 7979.96 7774.21 8072.56 00:14:36.691 00:14:36.691 [2024-10-08 18:22:29.556940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:36.691 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:36.691 [2024-10-08 18:22:29.740821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:41.965 [2024-10-08 18:22:34.803667] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:41.965 Initializing NVMe Controllers 00:14:41.965 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:41.965 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:41.965 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:41.965 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:41.965 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:41.965 Initialization complete. Launching workers. 
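Backing up to the two spdk_nvme_perf runs above, a quick Little's-law check shows both were bounded by the requested queue depth rather than by the target. Mean outstanding I/Os = IOPS x mean latency, which should recover the -q 128 setting:

    read:  39950.40 IO/s x 3203.82 us = ~128.0 outstanding I/Os
    write: 16050.92 IO/s x 7979.96 us = ~128.1 outstanding I/Os

Both runs kept the full queue depth in flight on average, so the write pass posts fewer IOPS purely because each operation takes roughly 2.5x as long, not because fewer I/Os were queued.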
00:14:41.965 Starting thread on core 2 00:14:41.965 Starting thread on core 3 00:14:41.965 Starting thread on core 1 00:14:41.965 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:41.965 [2024-10-08 18:22:35.079086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:45.255 [2024-10-08 18:22:38.152737] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:45.255 Initializing NVMe Controllers 00:14:45.255 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:45.255 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:45.255 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:45.255 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:45.255 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:45.255 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:45.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:45.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:45.255 Initialization complete. Launching workers. 00:14:45.255 Starting thread on core 1 with urgent priority queue 00:14:45.255 Starting thread on core 2 with urgent priority queue 00:14:45.255 Starting thread on core 3 with urgent priority queue 00:14:45.255 Starting thread on core 0 with urgent priority queue 00:14:45.256 SPDK bdev Controller (SPDK1 ) core 0: 6226.67 IO/s 16.06 secs/100000 ios 00:14:45.256 SPDK bdev Controller (SPDK1 ) core 1: 6071.67 IO/s 16.47 secs/100000 ios 00:14:45.256 SPDK bdev Controller (SPDK1 ) core 2: 5635.67 IO/s 17.74 secs/100000 ios 00:14:45.256 SPDK bdev Controller (SPDK1 ) core 3: 4806.33 IO/s 20.81 secs/100000 ios 00:14:45.256 ======================================================== 00:14:45.256 00:14:45.256 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:45.256 [2024-10-08 18:22:38.427869] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:45.256 Initializing NVMe Controllers 00:14:45.256 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:45.256 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:45.256 Namespace ID: 1 size: 0GB 00:14:45.256 Initialization complete. 00:14:45.256 INFO: using host memory buffer for IO 00:14:45.256 Hello world! 
00:14:45.256 [2024-10-08 18:22:38.461087] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:45.256 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:45.515 [2024-10-08 18:22:38.727073] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.453 Initializing NVMe Controllers 00:14:46.453 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.453 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.453 Initialization complete. Launching workers. 00:14:46.453 submit (in ns) avg, min, max = 5488.5, 3177.1, 3998850.5 00:14:46.453 complete (in ns) avg, min, max = 22328.7, 1755.2, 3998504.8 00:14:46.453 00:14:46.453 Submit histogram 00:14:46.453 ================ 00:14:46.453 Range in us Cumulative Count 00:14:46.453 3.170 - 3.185: 0.0177% ( 3) 00:14:46.453 3.185 - 3.200: 0.5381% ( 88) 00:14:46.453 3.200 - 3.215: 3.2289% ( 455) 00:14:46.453 3.215 - 3.230: 8.3501% ( 866) 00:14:46.453 3.230 - 3.246: 13.4477% ( 862) 00:14:46.453 3.246 - 3.261: 20.2898% ( 1157) 00:14:46.453 3.261 - 3.276: 27.1082% ( 1153) 00:14:46.453 3.276 - 3.291: 32.9450% ( 987) 00:14:46.453 3.291 - 3.307: 39.2194% ( 1061) 00:14:46.453 3.307 - 3.322: 44.9734% ( 973) 00:14:46.453 3.322 - 3.337: 50.2543% ( 893) 00:14:46.453 3.337 - 3.352: 55.6535% ( 913) 00:14:46.453 3.352 - 3.368: 64.3879% ( 1477) 00:14:46.453 3.368 - 3.383: 70.6564% ( 1060) 00:14:46.453 3.383 - 3.398: 75.4287% ( 807) 00:14:46.453 3.398 - 3.413: 80.1478% ( 798) 00:14:46.453 3.413 - 3.429: 83.2170% ( 519) 00:14:46.453 3.429 - 3.444: 85.4701% ( 381) 00:14:46.453 3.444 - 3.459: 86.4991% ( 174) 00:14:46.453 3.459 - 3.474: 87.1792% ( 115) 00:14:46.453 3.474 - 3.490: 87.5872% ( 69) 00:14:46.453 3.490 - 3.505: 88.0485% ( 78) 00:14:46.453 3.505 - 3.520: 88.7286% ( 115) 00:14:46.453 3.520 - 3.535: 89.5860% ( 145) 00:14:46.453 3.535 - 3.550: 90.4908% ( 153) 00:14:46.453 3.550 - 3.566: 91.4370% ( 160) 00:14:46.453 3.566 - 3.581: 92.4364% ( 169) 00:14:46.453 3.581 - 3.596: 93.4004% ( 163) 00:14:46.453 3.596 - 3.611: 94.5417% ( 193) 00:14:46.453 3.611 - 3.627: 95.6180% ( 182) 00:14:46.453 3.627 - 3.642: 96.5405% ( 156) 00:14:46.453 3.642 - 3.657: 97.2561% ( 121) 00:14:46.453 3.657 - 3.672: 97.9834% ( 123) 00:14:46.453 3.672 - 3.688: 98.4033% ( 71) 00:14:46.453 3.688 - 3.703: 98.7877% ( 65) 00:14:46.453 3.703 - 3.718: 99.0656% ( 47) 00:14:46.453 3.718 - 3.733: 99.2667% ( 34) 00:14:46.453 3.733 - 3.749: 99.4441% ( 30) 00:14:46.453 3.749 - 3.764: 99.4914% ( 8) 00:14:46.453 3.764 - 3.779: 99.5506% ( 10) 00:14:46.453 3.779 - 3.794: 99.5979% ( 8) 00:14:46.453 3.794 - 3.810: 99.6097% ( 2) 00:14:46.453 3.810 - 3.825: 99.6156% ( 1) 00:14:46.453 3.825 - 3.840: 99.6393% ( 4) 00:14:46.453 3.840 - 3.855: 99.6511% ( 2) 00:14:46.453 4.053 - 4.084: 99.6570% ( 1) 00:14:46.453 4.968 - 4.998: 99.6629% ( 1) 00:14:46.453 4.998 - 5.029: 99.6807% ( 3) 00:14:46.453 5.029 - 5.059: 99.6866% ( 1) 00:14:46.453 5.059 - 5.090: 99.6925% ( 1) 00:14:46.453 5.120 - 5.150: 99.6984% ( 1) 00:14:46.453 5.211 - 5.242: 99.7043% ( 1) 00:14:46.453 5.425 - 5.455: 99.7161% ( 2) 00:14:46.453 5.455 - 5.486: 99.7221% ( 1) 00:14:46.453 5.516 - 5.547: 99.7398% ( 3) 00:14:46.453 5.730 - 5.760: 99.7457% ( 1) 00:14:46.453 5.760 - 5.790: 99.7516% ( 1) 00:14:46.453 
5.790 - 5.821: 99.7575% ( 1) 00:14:46.453 5.851 - 5.882: 99.7694% ( 2) 00:14:46.453 5.882 - 5.912: 99.7753% ( 1) 00:14:46.453 6.065 - 6.095: 99.7812% ( 1) 00:14:46.453 6.095 - 6.126: 99.7871% ( 1) 00:14:46.453 6.126 - 6.156: 99.7930% ( 1) 00:14:46.453 6.430 - 6.461: 99.8048% ( 2) 00:14:46.453 6.522 - 6.552: 99.8108% ( 1) 00:14:46.453 6.552 - 6.583: 99.8167% ( 1) 00:14:46.453 6.613 - 6.644: 99.8285% ( 2) 00:14:46.453 6.735 - 6.766: 99.8344% ( 1) 00:14:46.453 6.796 - 6.827: 99.8403% ( 1) 00:14:46.453 6.857 - 6.888: 99.8462% ( 1) 00:14:46.453 6.888 - 6.918: 99.8522% ( 1) 00:14:46.453 6.949 - 6.979: 99.8581% ( 1) 00:14:46.453 7.010 - 7.040: 99.8758% ( 3) 00:14:46.453 7.070 - 7.101: 99.8817% ( 1) 00:14:46.453 7.253 - 7.284: 99.8936% ( 2) 00:14:46.453 7.284 - 7.314: 99.9113% ( 3) 00:14:46.453 7.528 - 7.558: 99.9172% ( 1) 00:14:46.453 7.589 - 7.619: 99.9231% ( 1) 00:14:46.453 7.650 - 7.680: 99.9290% ( 1) 00:14:46.453 [2024-10-08 18:22:39.745829] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.712 7.863 - 7.924: 99.9349% ( 1) 00:14:46.712 7.985 - 8.046: 99.9468% ( 2) 00:14:46.713 3994.575 - 4025.783: 100.0000% ( 9) 00:14:46.713 00:14:46.713 Complete histogram 00:14:46.713 ================== 00:14:46.713 Range in us Cumulative Count 00:14:46.713 1.752 - 1.760: 0.0177% ( 3) 00:14:46.713 1.760 - 1.768: 0.2306% ( 36) 00:14:46.713 1.768 - 1.775: 2.7499% ( 426) 00:14:46.713 1.775 - 1.783: 9.6511% ( 1167) 00:14:46.713 1.783 - 1.790: 16.4400% ( 1148) 00:14:46.713 1.790 - 1.798: 20.0591% ( 612) 00:14:46.713 1.798 - 1.806: 21.7031% ( 278) 00:14:46.713 1.806 - 1.813: 22.7617% ( 179) 00:14:46.713 1.813 - 1.821: 26.1916% ( 580) 00:14:46.713 1.821 - 1.829: 40.7924% ( 2469) 00:14:46.713 1.829 - 1.836: 65.2691% ( 4139) 00:14:46.713 1.836 - 1.844: 84.0568% ( 3177) 00:14:46.713 1.844 - 1.851: 91.0704% ( 1186) 00:14:46.713 1.851 - 1.859: 93.8498% ( 470) 00:14:46.713 1.859 - 1.867: 95.6712% ( 308) 00:14:46.713 1.867 - 1.874: 96.7593% ( 184) 00:14:46.713 1.874 - 1.882: 97.2797% ( 88) 00:14:46.713 1.882 - 1.890: 97.6641% ( 65) 00:14:46.713 1.890 - 1.897: 97.9657% ( 51) 00:14:46.713 1.897 - 1.905: 98.3383% ( 63) 00:14:46.713 1.905 - 1.912: 98.7108% ( 63) 00:14:46.713 1.912 - 1.920: 99.0834% ( 63) 00:14:46.713 1.920 - 1.928: 99.2135% ( 22) 00:14:46.713 1.928 - 1.935: 99.2844% ( 12) 00:14:46.713 1.935 - 1.943: 99.3140% ( 5) 00:14:46.713 1.943 - 1.950: 99.3377% ( 4) 00:14:46.713 1.966 - 1.981: 99.3495% ( 2) 00:14:46.713 2.011 - 2.027: 99.3554% ( 1) 00:14:46.713 2.103 - 2.118: 99.3613% ( 1) 00:14:46.713 2.133 - 2.149: 99.3672% ( 1) 00:14:46.713 2.194 - 2.210: 99.3791% ( 2) 00:14:46.713 2.225 - 2.240: 99.3850% ( 1) 00:14:46.713 2.377 - 2.392: 99.3909% ( 1) 00:14:46.713 2.423 - 2.438: 99.3968% ( 1) 00:14:46.713 3.520 - 3.535: 99.4027% ( 1) 00:14:46.713 4.023 - 4.053: 99.4086% ( 1) 00:14:46.713 4.358 - 4.389: 99.4145% ( 1) 00:14:46.713 4.389 - 4.419: 99.4205% ( 1) 00:14:46.713 4.450 - 4.480: 99.4264% ( 1) 00:14:46.713 4.693 - 4.724: 99.4323% ( 1) 00:14:46.713 4.876 - 4.907: 99.4500% ( 3) 00:14:46.713 4.907 - 4.937: 99.4559% ( 1) 00:14:46.713 5.181 - 5.211: 99.4619% ( 1) 00:14:46.713 6.400 - 6.430: 99.4678% ( 1) 00:14:46.713 8.472 - 8.533: 99.4737% ( 1) 00:14:46.713 9.448 - 9.509: 99.4796% ( 1) 00:14:46.713 866.011 - 869.912: 99.4855% ( 1) 00:14:46.713 2995.931 - 3011.535: 99.4914% ( 1) 00:14:46.713 3292.404 - 3308.008: 99.4973% ( 1) 00:14:46.713 3978.971 - 3994.575: 99.5092% ( 2) 00:14:46.713 3994.575 - 4025.783: 100.0000% ( 83) 00:14:46.713 00:14:46.713 18:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:46.713 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:46.713 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:46.713 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:46.713 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:46.713 [ 00:14:46.713 { 00:14:46.713 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:46.713 "subtype": "Discovery", 00:14:46.713 "listen_addresses": [], 00:14:46.713 "allow_any_host": true, 00:14:46.713 "hosts": [] 00:14:46.713 }, 00:14:46.713 { 00:14:46.713 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:46.713 "subtype": "NVMe", 00:14:46.713 "listen_addresses": [ 00:14:46.713 { 00:14:46.713 "trtype": "VFIOUSER", 00:14:46.713 "adrfam": "IPv4", 00:14:46.713 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:46.713 "trsvcid": "0" 00:14:46.713 } 00:14:46.713 ], 00:14:46.713 "allow_any_host": true, 00:14:46.713 "hosts": [], 00:14:46.713 "serial_number": "SPDK1", 00:14:46.713 "model_number": "SPDK bdev Controller", 00:14:46.713 "max_namespaces": 32, 00:14:46.713 "min_cntlid": 1, 00:14:46.713 "max_cntlid": 65519, 00:14:46.713 "namespaces": [ 00:14:46.713 { 00:14:46.713 "nsid": 1, 00:14:46.713 "bdev_name": "Malloc1", 00:14:46.713 "name": "Malloc1", 00:14:46.713 "nguid": "4B1764F0009A4CCB8EA76DE7FD7006AF", 00:14:46.713 "uuid": "4b1764f0-009a-4ccb-8ea7-6de7fd7006af" 00:14:46.713 } 00:14:46.713 ] 00:14:46.713 }, 00:14:46.713 { 00:14:46.713 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:46.713 "subtype": "NVMe", 00:14:46.713 "listen_addresses": [ 00:14:46.713 { 00:14:46.713 "trtype": "VFIOUSER", 00:14:46.713 "adrfam": "IPv4", 00:14:46.713 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:46.713 "trsvcid": "0" 00:14:46.713 } 00:14:46.713 ], 00:14:46.713 "allow_any_host": true, 00:14:46.713 "hosts": [], 00:14:46.713 "serial_number": "SPDK2", 00:14:46.713 "model_number": "SPDK bdev Controller", 00:14:46.713 "max_namespaces": 32, 00:14:46.713 "min_cntlid": 1, 00:14:46.713 "max_cntlid": 65519, 00:14:46.713 "namespaces": [ 00:14:46.713 { 00:14:46.713 "nsid": 1, 00:14:46.713 "bdev_name": "Malloc2", 00:14:46.713 "name": "Malloc2", 00:14:46.713 "nguid": "265E544878194DF2A5B701830A941B2A", 00:14:46.713 "uuid": "265e5448-7819-4df2-a5b7-01830a941b2a" 00:14:46.713 } 00:14:46.713 ] 00:14:46.713 } 00:14:46.713 ] 00:14:46.713 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:46.713 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:46.713 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=388162 00:14:46.713 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:46.713 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@1265 -- # local i=0 00:14:46.713 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:46.713 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:46.713 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:46.713 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:46.713 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:46.973 [2024-10-08 18:22:40.126841] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.973 Malloc3 00:14:46.973 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:47.231 [2024-10-08 18:22:40.408928] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:47.231 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:47.231 Asynchronous Event Request test 00:14:47.231 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:47.231 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:47.231 Registering asynchronous event callbacks... 00:14:47.231 Starting namespace attribute notice tests for all controllers... 00:14:47.231 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:47.231 aer_cb - Changed Namespace 00:14:47.231 Cleaning up... 
00:14:47.492 [ 00:14:47.492 { 00:14:47.492 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:47.492 "subtype": "Discovery", 00:14:47.492 "listen_addresses": [], 00:14:47.492 "allow_any_host": true, 00:14:47.492 "hosts": [] 00:14:47.492 }, 00:14:47.492 { 00:14:47.492 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:47.492 "subtype": "NVMe", 00:14:47.492 "listen_addresses": [ 00:14:47.492 { 00:14:47.492 "trtype": "VFIOUSER", 00:14:47.492 "adrfam": "IPv4", 00:14:47.492 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:47.492 "trsvcid": "0" 00:14:47.492 } 00:14:47.492 ], 00:14:47.492 "allow_any_host": true, 00:14:47.492 "hosts": [], 00:14:47.492 "serial_number": "SPDK1", 00:14:47.492 "model_number": "SPDK bdev Controller", 00:14:47.492 "max_namespaces": 32, 00:14:47.492 "min_cntlid": 1, 00:14:47.492 "max_cntlid": 65519, 00:14:47.492 "namespaces": [ 00:14:47.492 { 00:14:47.492 "nsid": 1, 00:14:47.492 "bdev_name": "Malloc1", 00:14:47.492 "name": "Malloc1", 00:14:47.492 "nguid": "4B1764F0009A4CCB8EA76DE7FD7006AF", 00:14:47.492 "uuid": "4b1764f0-009a-4ccb-8ea7-6de7fd7006af" 00:14:47.492 }, 00:14:47.492 { 00:14:47.492 "nsid": 2, 00:14:47.492 "bdev_name": "Malloc3", 00:14:47.492 "name": "Malloc3", 00:14:47.492 "nguid": "E7581A8AEBBF4886B73CDFCA556942E2", 00:14:47.492 "uuid": "e7581a8a-ebbf-4886-b73c-dfca556942e2" 00:14:47.492 } 00:14:47.492 ] 00:14:47.492 }, 00:14:47.492 { 00:14:47.492 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:47.492 "subtype": "NVMe", 00:14:47.492 "listen_addresses": [ 00:14:47.492 { 00:14:47.492 "trtype": "VFIOUSER", 00:14:47.492 "adrfam": "IPv4", 00:14:47.492 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:47.492 "trsvcid": "0" 00:14:47.492 } 00:14:47.492 ], 00:14:47.492 "allow_any_host": true, 00:14:47.492 "hosts": [], 00:14:47.492 "serial_number": "SPDK2", 00:14:47.492 "model_number": "SPDK bdev Controller", 00:14:47.492 "max_namespaces": 32, 00:14:47.492 "min_cntlid": 1, 00:14:47.492 "max_cntlid": 65519, 00:14:47.492 "namespaces": [ 00:14:47.492 { 00:14:47.492 "nsid": 1, 00:14:47.492 "bdev_name": "Malloc2", 00:14:47.492 "name": "Malloc2", 00:14:47.492 "nguid": "265E544878194DF2A5B701830A941B2A", 00:14:47.492 "uuid": "265e5448-7819-4df2-a5b7-01830a941b2a" 00:14:47.492 } 00:14:47.492 ] 00:14:47.492 } 00:14:47.492 ] 00:14:47.492 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 388162 00:14:47.492 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:47.492 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:47.492 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:47.492 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:47.492 [2024-10-08 18:22:40.660291] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:14:47.492 [2024-10-08 18:22:40.660339] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388383 ] 00:14:47.492 [2024-10-08 18:22:40.688579] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:47.492 [2024-10-08 18:22:40.696580] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:47.492 [2024-10-08 18:22:40.696604] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f20906e6000 00:14:47.492 [2024-10-08 18:22:40.697582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:47.492 [2024-10-08 18:22:40.698583] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:47.492 [2024-10-08 18:22:40.699589] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:47.492 [2024-10-08 18:22:40.700596] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:47.492 [2024-10-08 18:22:40.701603] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:47.492 [2024-10-08 18:22:40.702617] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:47.492 [2024-10-08 18:22:40.703622] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:47.492 [2024-10-08 18:22:40.704629] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:47.492 [2024-10-08 18:22:40.705643] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:47.492 [2024-10-08 18:22:40.705653] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f20906db000 00:14:47.492 [2024-10-08 18:22:40.706567] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:47.492 [2024-10-08 18:22:40.719921] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:47.492 [2024-10-08 18:22:40.719945] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:47.492 [2024-10-08 18:22:40.721992] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:47.492 [2024-10-08 18:22:40.722028] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:47.492 [2024-10-08 18:22:40.722099] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:47.492 [2024-10-08 
18:22:40.722115] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:47.492 [2024-10-08 18:22:40.722120] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:47.492 [2024-10-08 18:22:40.723002] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:47.492 [2024-10-08 18:22:40.723012] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:47.492 [2024-10-08 18:22:40.723018] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:47.492 [2024-10-08 18:22:40.724008] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:47.492 [2024-10-08 18:22:40.724016] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:47.492 [2024-10-08 18:22:40.724023] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:47.492 [2024-10-08 18:22:40.725016] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:47.492 [2024-10-08 18:22:40.725025] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:47.492 [2024-10-08 18:22:40.726020] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:47.492 [2024-10-08 18:22:40.726028] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:47.492 [2024-10-08 18:22:40.726033] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:47.492 [2024-10-08 18:22:40.726039] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:47.492 [2024-10-08 18:22:40.726145] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:47.492 [2024-10-08 18:22:40.726149] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:47.492 [2024-10-08 18:22:40.726154] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:47.492 [2024-10-08 18:22:40.727030] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:47.492 [2024-10-08 18:22:40.728037] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:47.492 [2024-10-08 18:22:40.729048] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:14:47.492 [2024-10-08 18:22:40.730054] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:47.492 [2024-10-08 18:22:40.730090] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:47.492 [2024-10-08 18:22:40.731064] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:47.492 [2024-10-08 18:22:40.731072] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:47.492 [2024-10-08 18:22:40.731077] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:47.492 [2024-10-08 18:22:40.731094] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:47.492 [2024-10-08 18:22:40.731105] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:47.492 [2024-10-08 18:22:40.731119] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:47.492 [2024-10-08 18:22:40.731124] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:47.492 [2024-10-08 18:22:40.731127] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:47.492 [2024-10-08 18:22:40.731139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:47.492 [2024-10-08 18:22:40.737381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:47.493 [2024-10-08 18:22:40.737395] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:47.493 [2024-10-08 18:22:40.737400] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:47.493 [2024-10-08 18:22:40.737404] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:47.493 [2024-10-08 18:22:40.737408] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:47.493 [2024-10-08 18:22:40.737412] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:47.493 [2024-10-08 18:22:40.737416] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:47.493 [2024-10-08 18:22:40.737421] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.737428] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.737440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:47.493 [2024-10-08 18:22:40.745379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:47.493 [2024-10-08 18:22:40.745401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:47.493 [2024-10-08 18:22:40.745409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:47.493 [2024-10-08 18:22:40.745416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:47.493 [2024-10-08 18:22:40.745423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:47.493 [2024-10-08 18:22:40.745427] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.745435] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.745444] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:47.493 [2024-10-08 18:22:40.753381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:47.493 [2024-10-08 18:22:40.753389] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:47.493 [2024-10-08 18:22:40.753394] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.753404] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.753409] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.753417] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:47.493 [2024-10-08 18:22:40.761379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:47.493 [2024-10-08 18:22:40.761430] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.761437] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.761444] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:47.493 [2024-10-08 18:22:40.761449] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:47.493 [2024-10-08 18:22:40.761452] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:14:47.493 [2024-10-08 18:22:40.761458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:47.493 [2024-10-08 18:22:40.769380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:47.493 [2024-10-08 18:22:40.769393] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:47.493 [2024-10-08 18:22:40.769400] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.769407] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.769413] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:47.493 [2024-10-08 18:22:40.769417] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:47.493 [2024-10-08 18:22:40.769420] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:47.493 [2024-10-08 18:22:40.769426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:47.493 [2024-10-08 18:22:40.777380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:47.493 [2024-10-08 18:22:40.777391] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.777398] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.777405] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:47.493 [2024-10-08 18:22:40.777409] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:47.493 [2024-10-08 18:22:40.777412] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:47.493 [2024-10-08 18:22:40.777418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:47.493 [2024-10-08 18:22:40.785380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:47.493 [2024-10-08 18:22:40.785393] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.785401] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.785410] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.785415] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.785419] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.785424] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.785428] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:47.493 [2024-10-08 18:22:40.785432] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:47.493 [2024-10-08 18:22:40.785437] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:47.493 [2024-10-08 18:22:40.785452] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:47.493 [2024-10-08 18:22:40.793380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:47.493 [2024-10-08 18:22:40.793392] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:47.493 [2024-10-08 18:22:40.801381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:47.493 [2024-10-08 18:22:40.801393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:47.493 [2024-10-08 18:22:40.809379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:47.493 [2024-10-08 18:22:40.809391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:47.754 [2024-10-08 18:22:40.817382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:47.754 [2024-10-08 18:22:40.817399] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:47.754 [2024-10-08 18:22:40.817403] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:47.754 [2024-10-08 18:22:40.817407] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:47.754 [2024-10-08 18:22:40.817410] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:47.754 [2024-10-08 18:22:40.817413] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:47.754 [2024-10-08 18:22:40.817419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:47.754 [2024-10-08 18:22:40.817426] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:47.754 [2024-10-08 18:22:40.817430] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:47.754 [2024-10-08 18:22:40.817433] 
nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:47.754 [2024-10-08 18:22:40.817438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:47.754 [2024-10-08 18:22:40.817447] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:47.754 [2024-10-08 18:22:40.817451] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:47.754 [2024-10-08 18:22:40.817454] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:47.754 [2024-10-08 18:22:40.817459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:47.754 [2024-10-08 18:22:40.817466] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:47.754 [2024-10-08 18:22:40.817470] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:47.754 [2024-10-08 18:22:40.817473] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:47.754 [2024-10-08 18:22:40.817478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:47.754 [2024-10-08 18:22:40.825381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:47.754 [2024-10-08 18:22:40.825394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:47.754 [2024-10-08 18:22:40.825403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:47.754 [2024-10-08 18:22:40.825409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:47.754 ===================================================== 00:14:47.754 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:47.754 ===================================================== 00:14:47.754 Controller Capabilities/Features 00:14:47.754 ================================ 00:14:47.754 Vendor ID: 4e58 00:14:47.754 Subsystem Vendor ID: 4e58 00:14:47.754 Serial Number: SPDK2 00:14:47.754 Model Number: SPDK bdev Controller 00:14:47.754 Firmware Version: 25.01 00:14:47.754 Recommended Arb Burst: 6 00:14:47.754 IEEE OUI Identifier: 8d 6b 50 00:14:47.754 Multi-path I/O 00:14:47.754 May have multiple subsystem ports: Yes 00:14:47.754 May have multiple controllers: Yes 00:14:47.754 Associated with SR-IOV VF: No 00:14:47.754 Max Data Transfer Size: 131072 00:14:47.754 Max Number of Namespaces: 32 00:14:47.754 Max Number of I/O Queues: 127 00:14:47.754 NVMe Specification Version (VS): 1.3 00:14:47.754 NVMe Specification Version (Identify): 1.3 00:14:47.754 Maximum Queue Entries: 256 00:14:47.754 Contiguous Queues Required: Yes 00:14:47.754 Arbitration Mechanisms Supported 00:14:47.754 Weighted Round Robin: Not Supported 00:14:47.754 Vendor Specific: Not Supported 00:14:47.754 Reset Timeout: 15000 ms 00:14:47.754 Doorbell Stride: 4 bytes 00:14:47.754 NVM Subsystem Reset: Not Supported 00:14:47.754 Command 
Sets Supported 00:14:47.754 NVM Command Set: Supported 00:14:47.754 Boot Partition: Not Supported 00:14:47.754 Memory Page Size Minimum: 4096 bytes 00:14:47.754 Memory Page Size Maximum: 4096 bytes 00:14:47.754 Persistent Memory Region: Not Supported 00:14:47.754 Optional Asynchronous Events Supported 00:14:47.754 Namespace Attribute Notices: Supported 00:14:47.754 Firmware Activation Notices: Not Supported 00:14:47.754 ANA Change Notices: Not Supported 00:14:47.754 PLE Aggregate Log Change Notices: Not Supported 00:14:47.754 LBA Status Info Alert Notices: Not Supported 00:14:47.754 EGE Aggregate Log Change Notices: Not Supported 00:14:47.754 Normal NVM Subsystem Shutdown event: Not Supported 00:14:47.754 Zone Descriptor Change Notices: Not Supported 00:14:47.754 Discovery Log Change Notices: Not Supported 00:14:47.754 Controller Attributes 00:14:47.754 128-bit Host Identifier: Supported 00:14:47.754 Non-Operational Permissive Mode: Not Supported 00:14:47.754 NVM Sets: Not Supported 00:14:47.754 Read Recovery Levels: Not Supported 00:14:47.754 Endurance Groups: Not Supported 00:14:47.754 Predictable Latency Mode: Not Supported 00:14:47.754 Traffic Based Keep ALive: Not Supported 00:14:47.754 Namespace Granularity: Not Supported 00:14:47.754 SQ Associations: Not Supported 00:14:47.754 UUID List: Not Supported 00:14:47.754 Multi-Domain Subsystem: Not Supported 00:14:47.754 Fixed Capacity Management: Not Supported 00:14:47.754 Variable Capacity Management: Not Supported 00:14:47.754 Delete Endurance Group: Not Supported 00:14:47.754 Delete NVM Set: Not Supported 00:14:47.754 Extended LBA Formats Supported: Not Supported 00:14:47.754 Flexible Data Placement Supported: Not Supported 00:14:47.754 00:14:47.754 Controller Memory Buffer Support 00:14:47.754 ================================ 00:14:47.754 Supported: No 00:14:47.754 00:14:47.754 Persistent Memory Region Support 00:14:47.754 ================================ 00:14:47.754 Supported: No 00:14:47.754 00:14:47.754 Admin Command Set Attributes 00:14:47.754 ============================ 00:14:47.754 Security Send/Receive: Not Supported 00:14:47.754 Format NVM: Not Supported 00:14:47.754 Firmware Activate/Download: Not Supported 00:14:47.754 Namespace Management: Not Supported 00:14:47.754 Device Self-Test: Not Supported 00:14:47.754 Directives: Not Supported 00:14:47.754 NVMe-MI: Not Supported 00:14:47.754 Virtualization Management: Not Supported 00:14:47.754 Doorbell Buffer Config: Not Supported 00:14:47.754 Get LBA Status Capability: Not Supported 00:14:47.754 Command & Feature Lockdown Capability: Not Supported 00:14:47.754 Abort Command Limit: 4 00:14:47.754 Async Event Request Limit: 4 00:14:47.754 Number of Firmware Slots: N/A 00:14:47.754 Firmware Slot 1 Read-Only: N/A 00:14:47.754 Firmware Activation Without Reset: N/A 00:14:47.754 Multiple Update Detection Support: N/A 00:14:47.754 Firmware Update Granularity: No Information Provided 00:14:47.754 Per-Namespace SMART Log: No 00:14:47.754 Asymmetric Namespace Access Log Page: Not Supported 00:14:47.754 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:47.754 Command Effects Log Page: Supported 00:14:47.754 Get Log Page Extended Data: Supported 00:14:47.754 Telemetry Log Pages: Not Supported 00:14:47.754 Persistent Event Log Pages: Not Supported 00:14:47.754 Supported Log Pages Log Page: May Support 00:14:47.754 Commands Supported & Effects Log Page: Not Supported 00:14:47.754 Feature Identifiers & Effects Log Page:May Support 00:14:47.754 NVMe-MI Commands & Effects Log Page: May Support 
00:14:47.754 Data Area 4 for Telemetry Log: Not Supported 00:14:47.754 Error Log Page Entries Supported: 128 00:14:47.754 Keep Alive: Supported 00:14:47.754 Keep Alive Granularity: 10000 ms 00:14:47.754 00:14:47.754 NVM Command Set Attributes 00:14:47.754 ========================== 00:14:47.754 Submission Queue Entry Size 00:14:47.754 Max: 64 00:14:47.754 Min: 64 00:14:47.754 Completion Queue Entry Size 00:14:47.754 Max: 16 00:14:47.754 Min: 16 00:14:47.754 Number of Namespaces: 32 00:14:47.754 Compare Command: Supported 00:14:47.754 Write Uncorrectable Command: Not Supported 00:14:47.754 Dataset Management Command: Supported 00:14:47.754 Write Zeroes Command: Supported 00:14:47.754 Set Features Save Field: Not Supported 00:14:47.754 Reservations: Not Supported 00:14:47.754 Timestamp: Not Supported 00:14:47.754 Copy: Supported 00:14:47.754 Volatile Write Cache: Present 00:14:47.754 Atomic Write Unit (Normal): 1 00:14:47.754 Atomic Write Unit (PFail): 1 00:14:47.754 Atomic Compare & Write Unit: 1 00:14:47.754 Fused Compare & Write: Supported 00:14:47.754 Scatter-Gather List 00:14:47.754 SGL Command Set: Supported (Dword aligned) 00:14:47.754 SGL Keyed: Not Supported 00:14:47.754 SGL Bit Bucket Descriptor: Not Supported 00:14:47.754 SGL Metadata Pointer: Not Supported 00:14:47.754 Oversized SGL: Not Supported 00:14:47.754 SGL Metadata Address: Not Supported 00:14:47.754 SGL Offset: Not Supported 00:14:47.754 Transport SGL Data Block: Not Supported 00:14:47.754 Replay Protected Memory Block: Not Supported 00:14:47.754 00:14:47.754 Firmware Slot Information 00:14:47.754 ========================= 00:14:47.754 Active slot: 1 00:14:47.755 Slot 1 Firmware Revision: 25.01 00:14:47.755 00:14:47.755 00:14:47.755 Commands Supported and Effects 00:14:47.755 ============================== 00:14:47.755 Admin Commands 00:14:47.755 -------------- 00:14:47.755 Get Log Page (02h): Supported 00:14:47.755 Identify (06h): Supported 00:14:47.755 Abort (08h): Supported 00:14:47.755 Set Features (09h): Supported 00:14:47.755 Get Features (0Ah): Supported 00:14:47.755 Asynchronous Event Request (0Ch): Supported 00:14:47.755 Keep Alive (18h): Supported 00:14:47.755 I/O Commands 00:14:47.755 ------------ 00:14:47.755 Flush (00h): Supported LBA-Change 00:14:47.755 Write (01h): Supported LBA-Change 00:14:47.755 Read (02h): Supported 00:14:47.755 Compare (05h): Supported 00:14:47.755 Write Zeroes (08h): Supported LBA-Change 00:14:47.755 Dataset Management (09h): Supported LBA-Change 00:14:47.755 Copy (19h): Supported LBA-Change 00:14:47.755 00:14:47.755 Error Log 00:14:47.755 ========= 00:14:47.755 00:14:47.755 Arbitration 00:14:47.755 =========== 00:14:47.755 Arbitration Burst: 1 00:14:47.755 00:14:47.755 Power Management 00:14:47.755 ================ 00:14:47.755 Number of Power States: 1 00:14:47.755 Current Power State: Power State #0 00:14:47.755 Power State #0: 00:14:47.755 Max Power: 0.00 W 00:14:47.755 Non-Operational State: Operational 00:14:47.755 Entry Latency: Not Reported 00:14:47.755 Exit Latency: Not Reported 00:14:47.755 Relative Read Throughput: 0 00:14:47.755 Relative Read Latency: 0 00:14:47.755 Relative Write Throughput: 0 00:14:47.755 Relative Write Latency: 0 00:14:47.755 Idle Power: Not Reported 00:14:47.755 Active Power: Not Reported 00:14:47.755 Non-Operational Permissive Mode: Not Supported 00:14:47.755 00:14:47.755 Health Information 00:14:47.755 ================== 00:14:47.755 Critical Warnings: 00:14:47.755 Available Spare Space: OK 00:14:47.755 Temperature: OK 00:14:47.755 Device 
Reliability: OK 00:14:47.755 Read Only: No 00:14:47.755 Volatile Memory Backup: OK 00:14:47.755 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:47.755 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:47.755 Available Spare: 0% 00:14:47.755 Available Sp[2024-10-08 18:22:40.825493] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:47.755 [2024-10-08 18:22:40.833379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:47.755 [2024-10-08 18:22:40.833413] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:47.755 [2024-10-08 18:22:40.833422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:47.755 [2024-10-08 18:22:40.833428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:47.755 [2024-10-08 18:22:40.833433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:47.755 [2024-10-08 18:22:40.833438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:47.755 [2024-10-08 18:22:40.833482] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:47.755 [2024-10-08 18:22:40.833492] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:47.755 [2024-10-08 18:22:40.834480] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:47.755 [2024-10-08 18:22:40.834523] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:47.755 [2024-10-08 18:22:40.834529] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:47.755 [2024-10-08 18:22:40.835491] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:47.755 [2024-10-08 18:22:40.835503] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:47.755 [2024-10-08 18:22:40.835548] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:47.755 [2024-10-08 18:22:40.838381] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:47.755 are Threshold: 0% 00:14:47.755 Life Percentage Used: 0% 00:14:47.755 Data Units Read: 0 00:14:47.755 Data Units Written: 0 00:14:47.755 Host Read Commands: 0 00:14:47.755 Host Write Commands: 0 00:14:47.755 Controller Busy Time: 0 minutes 00:14:47.755 Power Cycles: 0 00:14:47.755 Power On Hours: 0 hours 00:14:47.755 Unsafe Shutdowns: 0 00:14:47.755 Unrecoverable Media Errors: 0 00:14:47.755 Lifetime Error Log Entries: 0 00:14:47.755 Warning Temperature Time: 0 minutes 00:14:47.755 Critical Temperature Time: 0 minutes 00:14:47.755 00:14:47.755 Number of Queues 00:14:47.755 ================ 00:14:47.755 Number of 
I/O Submission Queues: 127 00:14:47.755 Number of I/O Completion Queues: 127 00:14:47.755 00:14:47.755 Active Namespaces 00:14:47.755 ================= 00:14:47.755 Namespace ID:1 00:14:47.755 Error Recovery Timeout: Unlimited 00:14:47.755 Command Set Identifier: NVM (00h) 00:14:47.755 Deallocate: Supported 00:14:47.755 Deallocated/Unwritten Error: Not Supported 00:14:47.755 Deallocated Read Value: Unknown 00:14:47.755 Deallocate in Write Zeroes: Not Supported 00:14:47.755 Deallocated Guard Field: 0xFFFF 00:14:47.755 Flush: Supported 00:14:47.755 Reservation: Supported 00:14:47.755 Namespace Sharing Capabilities: Multiple Controllers 00:14:47.755 Size (in LBAs): 131072 (0GiB) 00:14:47.755 Capacity (in LBAs): 131072 (0GiB) 00:14:47.755 Utilization (in LBAs): 131072 (0GiB) 00:14:47.755 NGUID: 265E544878194DF2A5B701830A941B2A 00:14:47.755 UUID: 265e5448-7819-4df2-a5b7-01830a941b2a 00:14:47.755 Thin Provisioning: Not Supported 00:14:47.755 Per-NS Atomic Units: Yes 00:14:47.755 Atomic Boundary Size (Normal): 0 00:14:47.755 Atomic Boundary Size (PFail): 0 00:14:47.755 Atomic Boundary Offset: 0 00:14:47.755 Maximum Single Source Range Length: 65535 00:14:47.755 Maximum Copy Length: 65535 00:14:47.755 Maximum Source Range Count: 1 00:14:47.755 NGUID/EUI64 Never Reused: No 00:14:47.755 Namespace Write Protected: No 00:14:47.755 Number of LBA Formats: 1 00:14:47.755 Current LBA Format: LBA Format #00 00:14:47.755 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:47.755 00:14:47.755 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:47.755 [2024-10-08 18:22:41.056547] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:53.028 Initializing NVMe Controllers 00:14:53.028 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:53.028 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:53.028 Initialization complete. Launching workers. 
00:14:53.028 ======================================================== 00:14:53.028 Latency(us) 00:14:53.028 Device Information : IOPS MiB/s Average min max 00:14:53.028 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39952.72 156.07 3203.37 935.24 8612.86 00:14:53.028 ======================================================== 00:14:53.028 Total : 39952.72 156.07 3203.37 935.24 8612.86 00:14:53.028 00:14:53.028 [2024-10-08 18:22:46.161623] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:53.028 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:53.286 [2024-10-08 18:22:46.380262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:58.560 Initializing NVMe Controllers 00:14:58.560 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:58.560 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:58.560 Initialization complete. Launching workers. 00:14:58.560 ======================================================== 00:14:58.560 Latency(us) 00:14:58.560 Device Information : IOPS MiB/s Average min max 00:14:58.560 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39935.98 156.00 3205.38 953.24 7654.81 00:14:58.560 ======================================================== 00:14:58.560 Total : 39935.98 156.00 3205.38 953.24 7654.81 00:14:58.560 00:14:58.560 [2024-10-08 18:22:51.400930] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:58.560 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:58.560 [2024-10-08 18:22:51.594152] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:03.832 [2024-10-08 18:22:56.754472] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:03.832 Initializing NVMe Controllers 00:15:03.832 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:03.832 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:03.832 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:03.832 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:03.832 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:03.832 Initialization complete. Launching workers. 
00:15:03.832 Starting thread on core 2 00:15:03.832 Starting thread on core 3 00:15:03.832 Starting thread on core 1 00:15:03.832 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:03.832 [2024-10-08 18:22:57.037830] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:07.119 [2024-10-08 18:23:00.088404] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:07.119 Initializing NVMe Controllers 00:15:07.119 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:07.119 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:07.119 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:07.119 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:07.119 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:07.119 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:07.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:07.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:07.119 Initialization complete. Launching workers. 00:15:07.119 Starting thread on core 1 with urgent priority queue 00:15:07.119 Starting thread on core 2 with urgent priority queue 00:15:07.119 Starting thread on core 3 with urgent priority queue 00:15:07.119 Starting thread on core 0 with urgent priority queue 00:15:07.119 SPDK bdev Controller (SPDK2 ) core 0: 10634.00 IO/s 9.40 secs/100000 ios 00:15:07.119 SPDK bdev Controller (SPDK2 ) core 1: 8975.67 IO/s 11.14 secs/100000 ios 00:15:07.119 SPDK bdev Controller (SPDK2 ) core 2: 9294.67 IO/s 10.76 secs/100000 ios 00:15:07.119 SPDK bdev Controller (SPDK2 ) core 3: 10437.33 IO/s 9.58 secs/100000 ios 00:15:07.119 ======================================================== 00:15:07.119 00:15:07.119 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:07.119 [2024-10-08 18:23:00.359288] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:07.119 Initializing NVMe Controllers 00:15:07.119 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:07.119 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:07.119 Namespace ID: 1 size: 0GB 00:15:07.119 Initialization complete. 00:15:07.119 INFO: using host memory buffer for IO 00:15:07.119 Hello world! 
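Note: the two spdk_nvme_perf tables above are internally consistent. With queue depth 128 (-q 128) and 4 KiB I/Os (-o 4096), Little's law gives roughly 128 / 3203.37 us ~ 39.96K IOPS, in line with the reported 39952.72, and 39952.72 * 4096 B ~ 156.07 MiB/s, matching the MiB/s column. All of the example binaries in this block share one invocation pattern; a minimal sketch, with the transport ID string copied verbatim from the trace and the common flags as I read them (-q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds, -c core mask; -s and -g are left exactly as the script passes them):

    ./build/bin/spdk_nvme_perf -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'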
00:15:07.119 [2024-10-08 18:23:00.369355] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:07.119 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:07.379 [2024-10-08 18:23:00.636184] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:08.758 Initializing NVMe Controllers 00:15:08.758 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.758 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.758 Initialization complete. Launching workers. 00:15:08.758 submit (in ns) avg, min, max = 6725.5, 3144.8, 4000200.0 00:15:08.758 complete (in ns) avg, min, max = 21980.0, 1713.3, 4005763.8 00:15:08.758 00:15:08.758 Submit histogram 00:15:08.758 ================ 00:15:08.758 Range in us Cumulative Count 00:15:08.758 3.139 - 3.154: 0.0238% ( 4) 00:15:08.758 3.154 - 3.170: 0.0833% ( 10) 00:15:08.758 3.170 - 3.185: 0.1726% ( 15) 00:15:08.758 3.185 - 3.200: 0.3214% ( 25) 00:15:08.758 3.200 - 3.215: 0.7381% ( 70) 00:15:08.758 3.215 - 3.230: 2.5476% ( 304) 00:15:08.758 3.230 - 3.246: 5.9643% ( 574) 00:15:08.758 3.246 - 3.261: 10.4405% ( 752) 00:15:08.758 3.261 - 3.276: 15.1488% ( 791) 00:15:08.758 3.276 - 3.291: 20.8512% ( 958) 00:15:08.758 3.291 - 3.307: 26.4881% ( 947) 00:15:08.758 3.307 - 3.322: 32.1607% ( 953) 00:15:08.758 3.322 - 3.337: 38.2321% ( 1020) 00:15:08.758 3.337 - 3.352: 44.3571% ( 1029) 00:15:08.758 3.352 - 3.368: 49.9881% ( 946) 00:15:08.758 3.368 - 3.383: 56.1905% ( 1042) 00:15:08.758 3.383 - 3.398: 62.3095% ( 1028) 00:15:08.758 3.398 - 3.413: 67.1250% ( 809) 00:15:08.758 3.413 - 3.429: 72.3036% ( 870) 00:15:08.758 3.429 - 3.444: 76.7262% ( 743) 00:15:08.758 3.444 - 3.459: 80.6250% ( 655) 00:15:08.758 3.459 - 3.474: 82.9940% ( 398) 00:15:08.758 3.474 - 3.490: 85.2143% ( 373) 00:15:08.758 3.490 - 3.505: 86.5000% ( 216) 00:15:08.758 3.505 - 3.520: 87.4821% ( 165) 00:15:08.758 3.520 - 3.535: 88.4464% ( 162) 00:15:08.758 3.535 - 3.550: 89.3571% ( 153) 00:15:08.758 3.550 - 3.566: 89.9583% ( 101) 00:15:08.758 3.566 - 3.581: 90.6548% ( 117) 00:15:08.758 3.581 - 3.596: 91.4464% ( 133) 00:15:08.758 3.596 - 3.611: 92.2560% ( 136) 00:15:08.758 3.611 - 3.627: 93.1726% ( 154) 00:15:08.758 3.627 - 3.642: 94.2440% ( 180) 00:15:08.758 3.642 - 3.657: 95.0952% ( 143) 00:15:08.758 3.657 - 3.672: 96.0119% ( 154) 00:15:08.758 3.672 - 3.688: 96.6607% ( 109) 00:15:08.758 3.688 - 3.703: 97.4464% ( 132) 00:15:08.758 3.703 - 3.718: 98.0714% ( 105) 00:15:08.758 3.718 - 3.733: 98.5060% ( 73) 00:15:08.758 3.733 - 3.749: 98.8750% ( 62) 00:15:08.758 3.749 - 3.764: 99.0417% ( 28) 00:15:08.758 3.764 - 3.779: 99.2143% ( 29) 00:15:08.758 3.779 - 3.794: 99.3393% ( 21) 00:15:08.758 3.794 - 3.810: 99.4464% ( 18) 00:15:08.758 3.810 - 3.825: 99.5298% ( 14) 00:15:08.758 3.825 - 3.840: 99.5774% ( 8) 00:15:08.758 3.840 - 3.855: 99.5833% ( 1) 00:15:08.758 3.855 - 3.870: 99.6012% ( 3) 00:15:08.758 3.870 - 3.886: 99.6190% ( 3) 00:15:08.758 3.886 - 3.901: 99.6250% ( 1) 00:15:08.758 3.931 - 3.962: 99.6310% ( 1) 00:15:08.758 4.023 - 4.053: 99.6369% ( 1) 00:15:08.758 5.333 - 5.364: 99.6429% ( 1) 00:15:08.758 5.425 - 5.455: 99.6548% ( 2) 00:15:08.758 5.547 - 5.577: 99.6667% ( 2) 00:15:08.758 5.730 - 5.760: 99.6786% ( 2) 00:15:08.758 5.882 - 5.912: 99.6905% ( 2) 
00:15:08.758 5.912 - 5.943: 99.6964% ( 1) 00:15:08.758 6.065 - 6.095: 99.7024% ( 1) 00:15:08.758 6.095 - 6.126: 99.7083% ( 1) 00:15:08.758 6.156 - 6.187: 99.7143% ( 1) 00:15:08.758 6.187 - 6.217: 99.7202% ( 1) 00:15:08.758 6.248 - 6.278: 99.7321% ( 2) 00:15:08.758 6.339 - 6.370: 99.7381% ( 1) 00:15:08.758 6.400 - 6.430: 99.7440% ( 1) 00:15:08.758 6.491 - 6.522: 99.7560% ( 2) 00:15:08.758 6.522 - 6.552: 99.7679% ( 2) 00:15:08.758 6.552 - 6.583: 99.7917% ( 4) 00:15:08.758 6.583 - 6.613: 99.7976% ( 1) 00:15:08.758 6.674 - 6.705: 99.8036% ( 1) 00:15:08.758 6.705 - 6.735: 99.8155% ( 2) 00:15:08.758 6.766 - 6.796: 99.8214% ( 1) 00:15:08.758 6.827 - 6.857: 99.8274% ( 1) 00:15:08.758 6.857 - 6.888: 99.8333% ( 1) 00:15:08.758 6.918 - 6.949: 99.8393% ( 1) 00:15:08.758 7.010 - 7.040: 99.8512% ( 2) 00:15:08.758 7.223 - 7.253: 99.8631% ( 2) 00:15:08.758 7.436 - 7.467: 99.8750% ( 2) 00:15:08.758 7.528 - 7.558: 99.8810% ( 1) 00:15:08.758 [2024-10-08 18:23:01.728387] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:08.758 7.710 - 7.741: 99.8869% ( 1) 00:15:08.758 7.771 - 7.802: 99.8929% ( 1) 00:15:08.758 7.924 - 7.985: 99.8988% ( 1) 00:15:08.758 8.411 - 8.472: 99.9048% ( 1) 00:15:08.758 11.947 - 12.008: 99.9107% ( 1) 00:15:08.758 19.261 - 19.383: 99.9167% ( 1) 00:15:08.758 3994.575 - 4025.783: 100.0000% ( 14) 00:15:08.758 00:15:08.758 Complete histogram 00:15:08.758 ================== 00:15:08.758 Range in us Cumulative Count 00:15:08.758 1.707 - 1.714: 0.0119% ( 2) 00:15:08.758 1.714 - 1.722: 0.1190% ( 18) 00:15:08.758 1.722 - 1.730: 0.4940% ( 63) 00:15:08.758 1.730 - 1.737: 0.8393% ( 58) 00:15:08.758 1.737 - 1.745: 1.0000% ( 27) 00:15:08.758 1.745 - 1.752: 1.0536% ( 9) 00:15:08.758 1.752 - 1.760: 1.1429% ( 15) 00:15:08.758 1.760 - 1.768: 3.4107% ( 381) 00:15:08.758 1.768 - 1.775: 19.7619% ( 2747) 00:15:08.758 1.775 - 1.783: 52.1488% ( 5441) 00:15:08.758 1.783 - 1.790: 72.0893% ( 3350) 00:15:08.758 1.790 - 1.798: 76.7143% ( 777) 00:15:08.758 1.798 - 1.806: 78.9821% ( 381) 00:15:08.758 1.806 - 1.813: 81.0476% ( 347) 00:15:08.758 1.813 - 1.821: 82.2619% ( 204) 00:15:08.758 1.821 - 1.829: 83.2917% ( 173) 00:15:08.758 1.829 - 1.836: 85.9524% ( 447) 00:15:08.758 1.836 - 1.844: 90.9405% ( 838) 00:15:08.758 1.844 - 1.851: 94.6607% ( 625) 00:15:08.759 1.851 - 1.859: 96.2976% ( 275) 00:15:08.759 1.859 - 1.867: 97.3274% ( 173) 00:15:08.759 1.867 - 1.874: 98.0595% ( 123) 00:15:08.759 1.874 - 1.882: 98.4940% ( 73) 00:15:08.759 1.882 - 1.890: 98.7083% ( 36) 00:15:08.759 1.890 - 1.897: 98.8750% ( 28) 00:15:08.759 1.897 - 1.905: 99.0000% ( 21) 00:15:08.759 1.905 - 1.912: 99.0655% ( 11) 00:15:08.759 1.912 - 1.920: 99.1131% ( 8) 00:15:08.759 1.920 - 1.928: 99.1548% ( 7) 00:15:08.759 1.928 - 1.935: 99.2024% ( 8) 00:15:08.759 1.935 - 1.943: 99.2143% ( 2) 00:15:08.759 1.950 - 1.966: 99.2321% ( 3) 00:15:08.759 1.966 - 1.981: 99.2500% ( 3) 00:15:08.759 1.981 - 1.996: 99.2679% ( 3) 00:15:08.759 2.011 - 2.027: 99.2738% ( 1) 00:15:08.759 2.057 - 2.072: 99.2798% ( 1) 00:15:08.759 2.286 - 2.301: 99.2857% ( 1) 00:15:08.759 4.145 - 4.175: 99.2976% ( 2) 00:15:08.759 4.175 - 4.206: 99.3036% ( 1) 00:15:08.759 4.328 - 4.358: 99.3095% ( 1) 00:15:08.759 4.358 - 4.389: 99.3155% ( 1) 00:15:08.759 4.389 - 4.419: 99.3214% ( 1) 00:15:08.759 4.419 - 4.450: 99.3274% ( 1) 00:15:08.759 4.510 - 4.541: 99.3333% ( 1) 00:15:08.759 4.571 - 4.602: 99.3393% ( 1) 00:15:08.759 4.846 - 4.876: 99.3452% ( 1) 00:15:08.759 4.876 - 4.907: 99.3571% ( 2) 00:15:08.759 4.937 - 4.968: 99.3631% ( 1) 
00:15:08.759 4.968 - 4.998: 99.3690% ( 1) 00:15:08.759 5.090 - 5.120: 99.3750% ( 1) 00:15:08.759 5.211 - 5.242: 99.3810% ( 1) 00:15:08.759 5.242 - 5.272: 99.3869% ( 1) 00:15:08.759 5.303 - 5.333: 99.3929% ( 1) 00:15:08.759 5.333 - 5.364: 99.3988% ( 1) 00:15:08.759 5.364 - 5.394: 99.4048% ( 1) 00:15:08.759 5.394 - 5.425: 99.4107% ( 1) 00:15:08.759 5.425 - 5.455: 99.4167% ( 1) 00:15:08.759 5.882 - 5.912: 99.4226% ( 1) 00:15:08.759 5.943 - 5.973: 99.4345% ( 2) 00:15:08.759 6.004 - 6.034: 99.4405% ( 1) 00:15:08.759 6.126 - 6.156: 99.4464% ( 1) 00:15:08.759 6.370 - 6.400: 99.4524% ( 1) 00:15:08.759 6.522 - 6.552: 99.4583% ( 1) 00:15:08.759 8.046 - 8.107: 99.4643% ( 1) 00:15:08.759 8.107 - 8.168: 99.4702% ( 1) 00:15:08.759 8.533 - 8.594: 99.4762% ( 1) 00:15:08.759 8.960 - 9.021: 99.4821% ( 1) 00:15:08.759 9.143 - 9.204: 99.4881% ( 1) 00:15:08.759 10.789 - 10.850: 99.4940% ( 1) 00:15:08.759 3479.650 - 3495.253: 99.5000% ( 1) 00:15:08.759 3994.575 - 4025.783: 100.0000% ( 84) 00:15:08.759 00:15:08.759 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:08.759 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:08.759 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:08.759 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:08.759 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:08.759 [ 00:15:08.759 { 00:15:08.759 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:08.759 "subtype": "Discovery", 00:15:08.759 "listen_addresses": [], 00:15:08.759 "allow_any_host": true, 00:15:08.759 "hosts": [] 00:15:08.759 }, 00:15:08.759 { 00:15:08.759 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:08.759 "subtype": "NVMe", 00:15:08.759 "listen_addresses": [ 00:15:08.759 { 00:15:08.759 "trtype": "VFIOUSER", 00:15:08.759 "adrfam": "IPv4", 00:15:08.759 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:08.759 "trsvcid": "0" 00:15:08.759 } 00:15:08.759 ], 00:15:08.759 "allow_any_host": true, 00:15:08.759 "hosts": [], 00:15:08.759 "serial_number": "SPDK1", 00:15:08.759 "model_number": "SPDK bdev Controller", 00:15:08.759 "max_namespaces": 32, 00:15:08.759 "min_cntlid": 1, 00:15:08.759 "max_cntlid": 65519, 00:15:08.759 "namespaces": [ 00:15:08.759 { 00:15:08.759 "nsid": 1, 00:15:08.759 "bdev_name": "Malloc1", 00:15:08.759 "name": "Malloc1", 00:15:08.759 "nguid": "4B1764F0009A4CCB8EA76DE7FD7006AF", 00:15:08.759 "uuid": "4b1764f0-009a-4ccb-8ea7-6de7fd7006af" 00:15:08.759 }, 00:15:08.759 { 00:15:08.759 "nsid": 2, 00:15:08.759 "bdev_name": "Malloc3", 00:15:08.759 "name": "Malloc3", 00:15:08.759 "nguid": "E7581A8AEBBF4886B73CDFCA556942E2", 00:15:08.759 "uuid": "e7581a8a-ebbf-4886-b73c-dfca556942e2" 00:15:08.759 } 00:15:08.759 ] 00:15:08.759 }, 00:15:08.759 { 00:15:08.759 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:08.759 "subtype": "NVMe", 00:15:08.759 "listen_addresses": [ 00:15:08.759 { 00:15:08.759 "trtype": "VFIOUSER", 00:15:08.759 "adrfam": "IPv4", 00:15:08.759 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:08.759 "trsvcid": "0" 00:15:08.759 } 00:15:08.759 ], 00:15:08.759 "allow_any_host": true, 00:15:08.759 "hosts": [], 
00:15:08.759 "serial_number": "SPDK2", 00:15:08.759 "model_number": "SPDK bdev Controller", 00:15:08.759 "max_namespaces": 32, 00:15:08.759 "min_cntlid": 1, 00:15:08.759 "max_cntlid": 65519, 00:15:08.759 "namespaces": [ 00:15:08.759 { 00:15:08.759 "nsid": 1, 00:15:08.759 "bdev_name": "Malloc2", 00:15:08.759 "name": "Malloc2", 00:15:08.759 "nguid": "265E544878194DF2A5B701830A941B2A", 00:15:08.759 "uuid": "265e5448-7819-4df2-a5b7-01830a941b2a" 00:15:08.759 } 00:15:08.759 ] 00:15:08.759 } 00:15:08.759 ] 00:15:08.759 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:08.759 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=391838 00:15:08.759 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:08.759 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:08.759 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:08.759 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:08.759 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:08.759 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:08.759 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:08.759 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:09.018 [2024-10-08 18:23:02.113817] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:09.018 Malloc4 00:15:09.018 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:09.278 [2024-10-08 18:23:02.356606] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:09.278 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:09.278 Asynchronous Event Request test 00:15:09.278 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:09.278 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:09.278 Registering asynchronous event callbacks... 00:15:09.278 Starting namespace attribute notice tests for all controllers... 00:15:09.278 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:09.278 aer_cb - Changed Namespace 00:15:09.278 Cleaning up... 
00:15:09.278 [ 00:15:09.278 { 00:15:09.278 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:09.278 "subtype": "Discovery", 00:15:09.278 "listen_addresses": [], 00:15:09.278 "allow_any_host": true, 00:15:09.278 "hosts": [] 00:15:09.278 }, 00:15:09.278 { 00:15:09.278 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:09.278 "subtype": "NVMe", 00:15:09.278 "listen_addresses": [ 00:15:09.278 { 00:15:09.278 "trtype": "VFIOUSER", 00:15:09.278 "adrfam": "IPv4", 00:15:09.278 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:09.278 "trsvcid": "0" 00:15:09.278 } 00:15:09.278 ], 00:15:09.278 "allow_any_host": true, 00:15:09.278 "hosts": [], 00:15:09.278 "serial_number": "SPDK1", 00:15:09.278 "model_number": "SPDK bdev Controller", 00:15:09.278 "max_namespaces": 32, 00:15:09.278 "min_cntlid": 1, 00:15:09.278 "max_cntlid": 65519, 00:15:09.278 "namespaces": [ 00:15:09.278 { 00:15:09.278 "nsid": 1, 00:15:09.278 "bdev_name": "Malloc1", 00:15:09.278 "name": "Malloc1", 00:15:09.278 "nguid": "4B1764F0009A4CCB8EA76DE7FD7006AF", 00:15:09.278 "uuid": "4b1764f0-009a-4ccb-8ea7-6de7fd7006af" 00:15:09.278 }, 00:15:09.278 { 00:15:09.278 "nsid": 2, 00:15:09.278 "bdev_name": "Malloc3", 00:15:09.278 "name": "Malloc3", 00:15:09.278 "nguid": "E7581A8AEBBF4886B73CDFCA556942E2", 00:15:09.278 "uuid": "e7581a8a-ebbf-4886-b73c-dfca556942e2" 00:15:09.278 } 00:15:09.278 ] 00:15:09.278 }, 00:15:09.278 { 00:15:09.278 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:09.278 "subtype": "NVMe", 00:15:09.278 "listen_addresses": [ 00:15:09.278 { 00:15:09.278 "trtype": "VFIOUSER", 00:15:09.278 "adrfam": "IPv4", 00:15:09.278 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:09.278 "trsvcid": "0" 00:15:09.278 } 00:15:09.278 ], 00:15:09.278 "allow_any_host": true, 00:15:09.278 "hosts": [], 00:15:09.278 "serial_number": "SPDK2", 00:15:09.278 "model_number": "SPDK bdev Controller", 00:15:09.278 "max_namespaces": 32, 00:15:09.278 "min_cntlid": 1, 00:15:09.278 "max_cntlid": 65519, 00:15:09.278 "namespaces": [ 00:15:09.278 { 00:15:09.278 "nsid": 1, 00:15:09.278 "bdev_name": "Malloc2", 00:15:09.278 "name": "Malloc2", 00:15:09.278 "nguid": "265E544878194DF2A5B701830A941B2A", 00:15:09.278 "uuid": "265e5448-7819-4df2-a5b7-01830a941b2a" 00:15:09.278 }, 00:15:09.278 { 00:15:09.278 "nsid": 2, 00:15:09.278 "bdev_name": "Malloc4", 00:15:09.278 "name": "Malloc4", 00:15:09.278 "nguid": "A76861C584DC4C04A7D7BE6AC003DB24", 00:15:09.278 "uuid": "a76861c5-84dc-4c04-a7d7-be6ac003db24" 00:15:09.278 } 00:15:09.278 ] 00:15:09.278 } 00:15:09.278 ] 00:15:09.278 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 391838 00:15:09.278 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:09.278 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 384136 00:15:09.278 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 384136 ']' 00:15:09.278 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 384136 00:15:09.278 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:09.278 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:09.278 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 384136 00:15:09.537 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:09.537 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:09.537 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 384136' 00:15:09.537 killing process with pid 384136 00:15:09.537 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 384136 00:15:09.537 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 384136 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=392072 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 392072' 00:15:09.796 Process pid: 392072 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 392072 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 392072 ']' 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:09.796 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:09.797 [2024-10-08 18:23:02.952803] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:09.797 [2024-10-08 18:23:02.953700] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:15:09.797 [2024-10-08 18:23:02.953738] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.797 [2024-10-08 18:23:03.019936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.797 [2024-10-08 18:23:03.086541] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.797 [2024-10-08 18:23:03.086585] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.797 [2024-10-08 18:23:03.086592] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.797 [2024-10-08 18:23:03.086598] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.797 [2024-10-08 18:23:03.086606] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.797 [2024-10-08 18:23:03.088225] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.797 [2024-10-08 18:23:03.088332] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.797 [2024-10-08 18:23:03.088451] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.797 [2024-10-08 18:23:03.088452] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.056 [2024-10-08 18:23:03.174367] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:10.056 [2024-10-08 18:23:03.174671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:10.056 [2024-10-08 18:23:03.175132] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:10.056 [2024-10-08 18:23:03.175478] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:10.056 [2024-10-08 18:23:03.175498] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
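From this point the script tears down the first target and re-runs the whole vfio-user setup with the target in interrupt mode. A minimal sketch of that bring-up, with the binary and flags copied from the trace around this point (-M and -I are vfio-user transport options passed through verbatim by the script; the log does not expand their meaning):

    # start nvmf_tgt on cores 0-3 with interrupt mode enabled
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    # once it is listening, create the VFIOUSER transport with the extra options
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I

The "Set spdk_thread (...) to intr mode from intr mode" notices above confirm each poll group came up in interrupt mode before the transport was created.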
00:15:10.621 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:10.621 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:10.621 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:11.551 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:11.809 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:11.809 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:11.809 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:11.809 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:11.809 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:12.068 Malloc1 00:15:12.068 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:12.373 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:12.373 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:12.659 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:12.659 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:12.659 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:12.918 Malloc2 00:15:12.918 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:12.918 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:13.177 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:13.436 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:13.436 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 392072 00:15:13.436 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 392072 ']' 00:15:13.436 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 392072 00:15:13.436 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:13.436 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:13.436 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 392072 00:15:13.436 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:13.436 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:13.436 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 392072' 00:15:13.436 killing process with pid 392072 00:15:13.436 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 392072 00:15:13.436 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 392072 00:15:13.694 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:13.694 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:13.694 00:15:13.694 real 0m51.805s 00:15:13.694 user 3m18.090s 00:15:13.694 sys 0m3.238s 00:15:13.694 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:13.694 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:13.694 ************************************ 00:15:13.694 END TEST nvmf_vfio_user 00:15:13.694 ************************************ 00:15:13.694 18:23:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:13.694 18:23:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:13.694 18:23:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:13.694 18:23:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:13.694 ************************************ 00:15:13.694 START TEST nvmf_vfio_user_nvme_compliance 00:15:13.694 ************************************ 00:15:13.694 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:13.954 * Looking for test storage... 
00:15:13.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:13.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.954 --rc genhtml_branch_coverage=1 00:15:13.954 --rc genhtml_function_coverage=1 00:15:13.954 --rc genhtml_legend=1 00:15:13.954 --rc geninfo_all_blocks=1 00:15:13.954 --rc geninfo_unexecuted_blocks=1 00:15:13.954 00:15:13.954 ' 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:13.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.954 --rc genhtml_branch_coverage=1 00:15:13.954 --rc genhtml_function_coverage=1 00:15:13.954 --rc genhtml_legend=1 00:15:13.954 --rc geninfo_all_blocks=1 00:15:13.954 --rc geninfo_unexecuted_blocks=1 00:15:13.954 00:15:13.954 ' 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:13.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.954 --rc genhtml_branch_coverage=1 00:15:13.954 --rc genhtml_function_coverage=1 00:15:13.954 --rc genhtml_legend=1 00:15:13.954 --rc geninfo_all_blocks=1 00:15:13.954 --rc geninfo_unexecuted_blocks=1 00:15:13.954 00:15:13.954 ' 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:13.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.954 --rc genhtml_branch_coverage=1 00:15:13.954 --rc genhtml_function_coverage=1 00:15:13.954 --rc genhtml_legend=1 00:15:13.954 --rc geninfo_all_blocks=1 00:15:13.954 --rc 
geninfo_unexecuted_blocks=1 00:15:13.954 00:15:13.954 ' 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.954 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:13.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=392848 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 392848' 00:15:13.955 Process pid: 392848 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 392848 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 392848 ']' 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:13.955 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:13.955 [2024-10-08 18:23:07.235371] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:15:13.955 [2024-10-08 18:23:07.235429] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.211 [2024-10-08 18:23:07.301130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:14.211 [2024-10-08 18:23:07.378620] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.211 [2024-10-08 18:23:07.378657] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.211 [2024-10-08 18:23:07.378664] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.211 [2024-10-08 18:23:07.378671] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.211 [2024-10-08 18:23:07.378676] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:14.211 [2024-10-08 18:23:07.379634] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.211 [2024-10-08 18:23:07.379668] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.211 [2024-10-08 18:23:07.379669] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.776 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.776 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:14.776 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:16.149 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:16.149 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:16.149 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:16.149 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.149 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.149 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.149 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:16.149 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:16.149 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.149 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.149 malloc0 00:15:16.149 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.150 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:16.150 18:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.150 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.150 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.150 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:16.150 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.150 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.150 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.150 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:16.150 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.150 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.150 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.150 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:16.150 00:15:16.150 00:15:16.150 CUnit - A unit testing framework for C - Version 2.1-3 00:15:16.150 http://cunit.sourceforge.net/ 00:15:16.150 00:15:16.150 00:15:16.150 Suite: nvme_compliance 00:15:16.150 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-08 18:23:09.297797] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.150 [2024-10-08 18:23:09.299142] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:16.150 [2024-10-08 18:23:09.299157] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:16.150 [2024-10-08 18:23:09.299163] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:16.150 [2024-10-08 18:23:09.300820] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.150 passed 00:15:16.150 Test: admin_identify_ctrlr_verify_fused ...[2024-10-08 18:23:09.379351] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.150 [2024-10-08 18:23:09.382371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.150 passed 00:15:16.150 Test: admin_identify_ns ...[2024-10-08 18:23:09.461110] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.408 [2024-10-08 18:23:09.521387] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:16.409 [2024-10-08 18:23:09.529394] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:16.409 [2024-10-08 18:23:09.550481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:16.409 passed 00:15:16.409 Test: admin_get_features_mandatory_features ...[2024-10-08 18:23:09.626420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.409 [2024-10-08 18:23:09.629438] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.409 passed 00:15:16.409 Test: admin_get_features_optional_features ...[2024-10-08 18:23:09.707951] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.409 [2024-10-08 18:23:09.710970] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.667 passed 00:15:16.667 Test: admin_set_features_number_of_queues ...[2024-10-08 18:23:09.788688] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.667 [2024-10-08 18:23:09.901488] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.667 passed 00:15:16.667 Test: admin_get_log_page_mandatory_logs ...[2024-10-08 18:23:09.973138] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.667 [2024-10-08 18:23:09.977168] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.927 passed 00:15:16.927 Test: admin_get_log_page_with_lpo ...[2024-10-08 18:23:10.058735] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.927 [2024-10-08 18:23:10.124398] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:16.927 [2024-10-08 18:23:10.137449] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.927 passed 00:15:16.927 Test: fabric_property_get ...[2024-10-08 18:23:10.214145] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.927 [2024-10-08 18:23:10.215390] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:16.927 [2024-10-08 18:23:10.217169] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.927 passed 00:15:17.187 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-08 18:23:10.294687] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.187 [2024-10-08 18:23:10.295912] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:17.187 [2024-10-08 18:23:10.297707] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.187 passed 00:15:17.187 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-08 18:23:10.375743] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.187 [2024-10-08 18:23:10.460384] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:17.187 [2024-10-08 18:23:10.476380] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:17.187 [2024-10-08 18:23:10.481533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.187 passed 00:15:17.446 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-08 18:23:10.555417] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.446 [2024-10-08 18:23:10.556652] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:17.446 [2024-10-08 18:23:10.558439] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.446 passed 00:15:17.446 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-08 18:23:10.637771] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.446 [2024-10-08 18:23:10.710381] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:17.446 [2024-10-08 18:23:10.734385] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:17.446 [2024-10-08 18:23:10.739459] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.446 passed 00:15:17.704 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-08 18:23:10.815128] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.704 [2024-10-08 18:23:10.816371] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:17.704 [2024-10-08 18:23:10.816399] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:17.704 [2024-10-08 18:23:10.818151] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.704 passed 00:15:17.704 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-08 18:23:10.894660] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.704 [2024-10-08 18:23:10.990387] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:17.704 [2024-10-08 18:23:10.998382] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:17.704 [2024-10-08 18:23:11.006388] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:17.704 [2024-10-08 18:23:11.014391] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:17.962 [2024-10-08 18:23:11.043468] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.962 passed 00:15:17.962 Test: admin_create_io_sq_verify_pc ...[2024-10-08 18:23:11.117210] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.962 [2024-10-08 18:23:11.132388] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:17.962 [2024-10-08 18:23:11.150386] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.962 passed 00:15:17.962 Test: admin_create_io_qp_max_qps ...[2024-10-08 18:23:11.227887] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.338 [2024-10-08 18:23:12.331385] nvme_ctrlr.c:5535:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:19.597 [2024-10-08 18:23:12.724618] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.597 passed 00:15:19.597 Test: admin_create_io_sq_shared_cq ...[2024-10-08 18:23:12.801650] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.857 [2024-10-08 18:23:12.934382] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:19.857 [2024-10-08 18:23:12.971444] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.857 passed 00:15:19.857 00:15:19.857 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.857 suites 1 1 n/a 0 0 00:15:19.857 tests 18 18 18 0 0 00:15:19.857 asserts 360 
360 360 0 n/a 00:15:19.857 00:15:19.857 Elapsed time = 1.511 seconds 00:15:19.857 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 392848 00:15:19.857 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 392848 ']' 00:15:19.857 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 392848 00:15:19.858 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:19.858 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.858 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 392848 00:15:19.858 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:19.858 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:19.858 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 392848' 00:15:19.858 killing process with pid 392848 00:15:19.858 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 392848 00:15:19.858 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 392848 00:15:20.117 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:20.117 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:20.117 00:15:20.117 real 0m6.299s 00:15:20.117 user 0m17.731s 00:15:20.117 sys 0m0.562s 00:15:20.117 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.117 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:20.117 ************************************ 00:15:20.117 END TEST nvmf_vfio_user_nvme_compliance 00:15:20.117 ************************************ 00:15:20.117 18:23:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:20.117 18:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:20.117 18:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.117 18:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:20.117 ************************************ 00:15:20.117 START TEST nvmf_vfio_user_fuzz 00:15:20.117 ************************************ 00:15:20.117 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:20.117 * Looking for test storage... 
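The CUnit run just summarized was produced by compliance.sh, which publishes a malloc-backed namespace on subsystem nqn.2021-09.io.spdk:cnode0 over the VFIOUSER transport and then points nvme_compliance at it. A condensed sketch of the traced steps, assuming rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py (the malloc0 bdev and the cnode0 subsystem themselves are created a few steps earlier in the script, outside this excerpt):

  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0
  test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

Each of the 18 tests enables the vfio-user controller, makes one admin-path assertion, and disables it again, which is why every "passed" line above is bracketed by enabling/disabling controller notices.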
00:15:20.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:20.117 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:20.377 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:15:20.377 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:20.377 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:20.377 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:20.377 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:20.377 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:20.377 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.377 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:20.377 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:20.377 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:20.377 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:20.377 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:20.377 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:20.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.378 --rc genhtml_branch_coverage=1 00:15:20.378 --rc genhtml_function_coverage=1 00:15:20.378 --rc genhtml_legend=1 00:15:20.378 --rc geninfo_all_blocks=1 00:15:20.378 --rc geninfo_unexecuted_blocks=1 00:15:20.378 00:15:20.378 ' 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:20.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.378 --rc genhtml_branch_coverage=1 00:15:20.378 --rc genhtml_function_coverage=1 00:15:20.378 --rc genhtml_legend=1 00:15:20.378 --rc geninfo_all_blocks=1 00:15:20.378 --rc geninfo_unexecuted_blocks=1 00:15:20.378 00:15:20.378 ' 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:20.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.378 --rc genhtml_branch_coverage=1 00:15:20.378 --rc genhtml_function_coverage=1 00:15:20.378 --rc genhtml_legend=1 00:15:20.378 --rc geninfo_all_blocks=1 00:15:20.378 --rc geninfo_unexecuted_blocks=1 00:15:20.378 00:15:20.378 ' 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:20.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.378 --rc genhtml_branch_coverage=1 00:15:20.378 --rc genhtml_function_coverage=1 00:15:20.378 --rc genhtml_legend=1 00:15:20.378 --rc geninfo_all_blocks=1 00:15:20.378 --rc geninfo_unexecuted_blocks=1 00:15:20.378 00:15:20.378 ' 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:20.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=393841 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 393841' 00:15:20.378 Process pid: 393841 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 393841 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 393841 ']' 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.378 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
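As the trace above shows, the fuzz harness first removes any stale /var/run/vfio-user socket directory, then launches nvmf_tgt pinned to core 0 with all trace flags enabled and blocks until the target's RPC socket answers. A minimal sketch of that startup sequence, assuming an SPDK build tree and the autotest waitforlisten helper seen in the trace:

  rm -rf /var/run/vfio-user
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock until the target responds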
00:15:20.379 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.379 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:21.314 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.314 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:21.314 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:22.253 malloc0 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
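With the target listening, the script assembles the fuzz device entirely over RPC: a VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2021-09.io.spdk:cnode0 exposed at /var/run/vfio-user. A condensed sketch of those calls, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock (all arguments as traced above):

  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0

nvme_fuzz then hammers this trid from core 1 (-m 0x2) for 30 seconds (-t 30) of randomized commands seeded with 123456 (-S), producing the opcode dump and per-queue command totals that follow.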
00:15:22.253 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:54.465 Fuzzing completed. Shutting down the fuzz application 00:15:54.465 00:15:54.465 Dumping successful admin opcodes: 00:15:54.465 8, 9, 10, 24, 00:15:54.465 Dumping successful io opcodes: 00:15:54.465 0, 00:15:54.465 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1123398, total successful commands: 4422, random_seed: 172742592 00:15:54.465 NS: 0x200003a1ef00 admin qp, Total commands completed: 276861, total successful commands: 2235, random_seed: 3386175488 00:15:54.465 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:54.465 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.465 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.465 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.465 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 393841 00:15:54.465 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 393841 ']' 00:15:54.465 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 393841 00:15:54.465 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:15:54.465 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:54.465 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 393841 00:15:54.465 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.465 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.465 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 393841' 00:15:54.465 killing process with pid 393841 00:15:54.465 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 393841 00:15:54.465 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 393841 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:54.465 00:15:54.465 real 0m32.930s 00:15:54.465 user 0m34.998s 00:15:54.465 sys 0m26.848s 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.465 ************************************ 
00:15:54.465 END TEST nvmf_vfio_user_fuzz 00:15:54.465 ************************************ 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.465 ************************************ 00:15:54.465 START TEST nvmf_auth_target 00:15:54.465 ************************************ 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:54.465 * Looking for test storage... 00:15:54.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:54.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.465 --rc genhtml_branch_coverage=1 00:15:54.465 --rc genhtml_function_coverage=1 00:15:54.465 --rc genhtml_legend=1 00:15:54.465 --rc geninfo_all_blocks=1 00:15:54.465 --rc geninfo_unexecuted_blocks=1 00:15:54.465 00:15:54.465 ' 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:54.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.465 --rc genhtml_branch_coverage=1 00:15:54.465 --rc genhtml_function_coverage=1 00:15:54.465 --rc genhtml_legend=1 00:15:54.465 --rc geninfo_all_blocks=1 00:15:54.465 --rc geninfo_unexecuted_blocks=1 00:15:54.465 00:15:54.465 ' 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:54.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.465 --rc genhtml_branch_coverage=1 00:15:54.465 --rc genhtml_function_coverage=1 00:15:54.465 --rc genhtml_legend=1 00:15:54.465 --rc geninfo_all_blocks=1 00:15:54.465 --rc geninfo_unexecuted_blocks=1 00:15:54.465 00:15:54.465 ' 00:15:54.465 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:54.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.465 --rc genhtml_branch_coverage=1 00:15:54.465 --rc genhtml_function_coverage=1 00:15:54.465 --rc genhtml_legend=1 00:15:54.465 --rc geninfo_all_blocks=1 00:15:54.465 --rc geninfo_unexecuted_blocks=1 00:15:54.465 00:15:54.465 ' 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.466 18:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:54.466 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:59.743 
18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:59.743 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:59.743 18:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:59.743 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:59.743 Found net devices under 0000:86:00.0: cvl_0_0 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:59.743 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:59.744 Found net devices under 0000:86:00.1: cvl_0_1 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:59.744 18:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:59.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:15:59.744 00:15:59.744 --- 10.0.0.2 ping statistics --- 00:15:59.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.744 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:59.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:59.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:15:59.744 00:15:59.744 --- 10.0.0.1 ping statistics --- 00:15:59.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.744 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=402370 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 402370 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 402370 ']' 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
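nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten blocks until the new process answers on /var/tmp/spdk.sock. The helper's body is not echoed in this trace; a minimal sketch of the usual polling idiom, assuming SPDK's rpc.py and its rpc_get_methods call (retry count and interval are illustrative):

waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1                        # process died before listening
        scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1                                                          # timed out
}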
00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:59.744 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=402613 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b3ea2514eac00a248c8ee954406e378b317afb764c767dda 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.vzL 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key b3ea2514eac00a248c8ee954406e378b317afb764c767dda 0 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b3ea2514eac00a248c8ee954406e378b317afb764c767dda 0 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b3ea2514eac00a248c8ee954406e378b317afb764c767dda 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
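The trailing "python -" is format_key's heredoc, which xtrace does not echo, but its output format is visible in the secrets used later: the key generated here (b3ea2514...) reappears below as DHHC-1:00:YjNlYTI1...==:. That is consistent with the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<digest-id>:<base64 of the ASCII key plus its little-endian CRC32>:. A hedged reconstruction of that step (the CRC32 suffix is an assumption inferred from that format):

key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex chars, per 'gen_dhchap_key null 48'
python3 - "$key" 0 <<'PYEOF'
import base64, sys, zlib
raw, digest = sys.argv[1].encode(), int(sys.argv[2])      # the hex string itself is the secret
blob = raw + zlib.crc32(raw).to_bytes(4, "little")        # secret || CRC32, little-endian
print("DHHC-1:%02d:%s:" % (digest, base64.b64encode(blob).decode()))
PYEOF

The digest id follows the map traced above (0 null, 1 sha256, 2 sha384, 3 sha512), matching the DHHC-1:00/01/02/03 prefixes seen in the nvme connect commands further down.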
00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.vzL 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.vzL 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.vzL 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=d1ab890f352ee00fea7bb649a41147f998532cb99da73d2dc800e926686f0ce1 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.P8g 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key d1ab890f352ee00fea7bb649a41147f998532cb99da73d2dc800e926686f0ce1 3 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 d1ab890f352ee00fea7bb649a41147f998532cb99da73d2dc800e926686f0ce1 3 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=d1ab890f352ee00fea7bb649a41147f998532cb99da73d2dc800e926686f0ce1 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.P8g 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.P8g 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.P8g 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
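Each generated secret lands in its own mode-0600 mktemp file, and the path is the generator's return value. The same five traced steps repeat for every slot below; condensed into one function (the redirection into the file is inferred, everything else is verbatim from the trace):

declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

gen_dhchap_key() {                          # usage: gen_dhchap_key <digest> <hex-len>
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)             # <hex-len> chars = len/2 random bytes
    file=$(mktemp -t "spdk.key-$digest.XXX")
    format_dhchap_key "$key" "${digests[$digest]}" > "$file"   # DHHC-1 wrapping, sketched earlier
    chmod 0600 "$file"                                         # keep the secret private
    echo "$file"
}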
00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=7d5bda8c847c8b526e222fb6d086fdfd 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.6OT 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 7d5bda8c847c8b526e222fb6d086fdfd 1 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 7d5bda8c847c8b526e222fb6d086fdfd 1 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=7d5bda8c847c8b526e222fb6d086fdfd 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.6OT 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.6OT 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.6OT 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:00.313 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:00.314 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:00.314 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=90ffe133a995cd8f74741151fae99f70d39c0947001ea13e 00:16:00.314 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:00.314 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Lic 00:16:00.314 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 90ffe133a995cd8f74741151fae99f70d39c0947001ea13e 2 00:16:00.314 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 90ffe133a995cd8f74741151fae99f70d39c0947001ea13e 2 00:16:00.314 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:00.314 18:23:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:00.314 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=90ffe133a995cd8f74741151fae99f70d39c0947001ea13e 00:16:00.314 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:00.314 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Lic 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Lic 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Lic 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=d606f6dc6f8925158b6118f0a9bcaca984da6896c31eb14b 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.VYx 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key d606f6dc6f8925158b6118f0a9bcaca984da6896c31eb14b 2 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 d606f6dc6f8925158b6118f0a9bcaca984da6896c31eb14b 2 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=d606f6dc6f8925158b6118f0a9bcaca984da6896c31eb14b 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.VYx 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.VYx 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.VYx 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=c4b2fa8248a7ee3aa8842534f3c30bb5 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.KbS 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key c4b2fa8248a7ee3aa8842534f3c30bb5 1 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 c4b2fa8248a7ee3aa8842534f3c30bb5 1 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=c4b2fa8248a7ee3aa8842534f3c30bb5 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.KbS 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.KbS 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.KbS 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=db47ecb7cb0ddfedd83adac08ff0bccf47681d41bff0375e739fe5ee99f3f3e9 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:00.573 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.lGn 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key db47ecb7cb0ddfedd83adac08ff0bccf47681d41bff0375e739fe5ee99f3f3e9 3 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 db47ecb7cb0ddfedd83adac08ff0bccf47681d41bff0375e739fe5ee99f3f3e9 3 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=db47ecb7cb0ddfedd83adac08ff0bccf47681d41bff0375e739fe5ee99f3f3e9 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.lGn 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.lGn 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.lGn 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 402370 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 402370 ']' 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:00.574 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.832 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.832 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:00.832 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 402613 /var/tmp/host.sock 00:16:00.832 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 402613 ']' 00:16:00.832 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:00.832 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:00.832 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:00.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
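Both daemons are now up: nvmf_tgt (the authenticating target, -L nvmf_auth) on /var/tmp/spdk.sock and spdk_tgt (standing in for the host, -L nvme_auth) on /var/tmp/host.sock, with four key slots populated. Summarized from this run (paths are the mktemp outputs above; slot 3 deliberately lacks a controller key, so its connections authenticate the host only):

keys[0]=/tmp/spdk.key-null.vzL    ckeys[0]=/tmp/spdk.key-sha512.P8g
keys[1]=/tmp/spdk.key-sha256.6OT  ckeys[1]=/tmp/spdk.key-sha384.Lic
keys[2]=/tmp/spdk.key-sha384.VYx  ckeys[2]=/tmp/spdk.key-sha256.KbS
keys[3]=/tmp/spdk.key-sha512.lGn  ckeys[3]=

Two RPC wrappers drive everything that follows (a sketch; the real helpers live in the test framework):

rpc_cmd() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # target-side RPC (nvmf_tgt)
hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }   # host-side RPC (spdk_tgt)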
00:16:00.832 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:00.832 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.091 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:01.091 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:01.091 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:01.091 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.091 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.091 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.091 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:01.091 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vzL 00:16:01.091 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.091 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.091 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.091 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.vzL 00:16:01.091 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vzL 00:16:01.350 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.P8g ]] 00:16:01.350 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.P8g 00:16:01.350 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.351 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.351 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.351 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.P8g 00:16:01.351 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.P8g 00:16:01.351 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:01.351 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.6OT 00:16:01.351 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.351 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.609 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.609 18:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.6OT 00:16:01.609 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.6OT 00:16:01.609 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Lic ]] 00:16:01.610 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lic 00:16:01.610 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.610 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.610 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.610 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lic 00:16:01.610 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lic 00:16:01.868 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:01.868 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.VYx 00:16:01.868 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.868 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.868 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.868 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.VYx 00:16:01.868 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.VYx 00:16:02.127 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.KbS ]] 00:16:02.127 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KbS 00:16:02.127 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.128 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.128 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.128 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KbS 00:16:02.128 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KbS 00:16:02.386 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:02.386 18:23:55 
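The loop at auth.sh@108-113, running above and finishing with key3 just below, registers every secret file with both keyrings under matching names, so target and host can later refer to the same slot as key$i/ckey$i. Condensed:

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"         # target keyring
    hostrpc keyring_file_add_key "key$i" "${keys[$i]}"         # host keyring
    if [[ -n ${ckeys[$i]} ]]; then                             # slot 3 has no controller key
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done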
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lGn 00:16:02.386 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.386 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.386 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.386 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.lGn 00:16:02.386 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.lGn 00:16:02.386 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:02.386 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:02.386 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.386 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.386 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:02.386 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:02.645 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:02.645 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.645 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:02.645 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:02.645 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:02.645 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.645 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.645 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.645 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.645 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.645 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.645 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.645 
18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.904 00:16:02.904 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.904 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.904 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.163 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.163 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.163 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.163 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.163 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.163 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.163 { 00:16:03.163 "cntlid": 1, 00:16:03.163 "qid": 0, 00:16:03.163 "state": "enabled", 00:16:03.163 "thread": "nvmf_tgt_poll_group_000", 00:16:03.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:03.163 "listen_address": { 00:16:03.163 "trtype": "TCP", 00:16:03.163 "adrfam": "IPv4", 00:16:03.163 "traddr": "10.0.0.2", 00:16:03.163 "trsvcid": "4420" 00:16:03.163 }, 00:16:03.163 "peer_address": { 00:16:03.163 "trtype": "TCP", 00:16:03.163 "adrfam": "IPv4", 00:16:03.163 "traddr": "10.0.0.1", 00:16:03.163 "trsvcid": "44348" 00:16:03.163 }, 00:16:03.163 "auth": { 00:16:03.163 "state": "completed", 00:16:03.163 "digest": "sha256", 00:16:03.163 "dhgroup": "null" 00:16:03.163 } 00:16:03.163 } 00:16:03.163 ]' 00:16:03.163 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.163 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.163 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.163 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:03.163 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.163 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.163 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.163 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.422 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:03.422 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:03.990 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.990 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:03.990 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.990 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.990 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.990 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.990 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:03.990 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:04.249 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:04.249 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.249 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.249 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:04.249 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:04.249 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.249 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.249 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.249 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.249 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.249 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.249 18:23:57 
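A full connect_authenticate pass has now completed for key0 and the next begins for key1. The first half of each pass (auth.sh@65-78) authorizes the host NQN on the subsystem with the slot's keys, attaches a controller through the host-side bdev layer, asserts that the resulting qpair reports auth state "completed" with the expected digest and dhgroup, and detaches; the kernel-initiator half (@80-83) is sketched further below. A condensed sketch (jq -e folds the trace's three separate checks into one expression):

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

connect_authenticate() {                    # usage: connect_authenticate <digest> <dhgroup> <keyid>
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" |
        jq -e --arg d "$digest" --arg g "$dhgroup" \
            '.[0].auth | .state == "completed" and .digest == $d and .dhgroup == $g'
    hostrpc bdev_nvme_detach_controller nvme0
}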
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.249 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.508 00:16:04.508 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.508 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.508 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.767 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.767 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.767 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.767 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.767 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.767 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.767 { 00:16:04.767 "cntlid": 3, 00:16:04.767 "qid": 0, 00:16:04.767 "state": "enabled", 00:16:04.767 "thread": "nvmf_tgt_poll_group_000", 00:16:04.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:04.767 "listen_address": { 00:16:04.767 "trtype": "TCP", 00:16:04.767 "adrfam": "IPv4", 00:16:04.767 "traddr": "10.0.0.2", 00:16:04.767 "trsvcid": "4420" 00:16:04.767 }, 00:16:04.767 "peer_address": { 00:16:04.767 "trtype": "TCP", 00:16:04.767 "adrfam": "IPv4", 00:16:04.767 "traddr": "10.0.0.1", 00:16:04.767 "trsvcid": "55354" 00:16:04.767 }, 00:16:04.767 "auth": { 00:16:04.767 "state": "completed", 00:16:04.767 "digest": "sha256", 00:16:04.767 "dhgroup": "null" 00:16:04.767 } 00:16:04.767 } 00:16:04.767 ]' 00:16:04.767 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.767 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.767 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.767 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:04.767 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.767 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.767 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.767 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.026 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:05.026 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:05.596 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.596 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:05.596 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.596 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.596 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.596 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.596 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:05.596 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:05.858 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:05.858 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.858 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:05.858 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:05.858 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:05.858 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.858 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.858 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.858 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.858 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.858 18:23:58 
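Each pass then finishes (auth.sh@80-83) by repeating the handshake end-to-end with the kernel initiator: nvme_connect (auth.sh@36) passes the literal DHHC-1 strings on the command line, and the base64 blobs visible above are exactly the file contents produced by gen_dhchap_key. A sketch of that leg, assuming an nvme-cli with DH-HMAC-CHAP support (reading the secrets back out of the key files is an assumption; only the expanded strings appear in the trace):

hostid=00ad29c2-ccbd-e911-906e-0017a4403562

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$(< "${keys[$keyid]}")" \
    ${ckeys[$keyid]:+--dhchap-ctrl-secret "$(< "${ckeys[$keyid]}")"}
nvme disconnect -n "$subnqn"                              # prints 'NQN:... disconnected 1 controller(s)'
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"   # revoke the host before the next slot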
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.858 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.858 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.117 00:16:06.117 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.117 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.117 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.117 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.117 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.117 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.117 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.117 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.117 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.117 { 00:16:06.117 "cntlid": 5, 00:16:06.117 "qid": 0, 00:16:06.117 "state": "enabled", 00:16:06.117 "thread": "nvmf_tgt_poll_group_000", 00:16:06.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:06.117 "listen_address": { 00:16:06.117 "trtype": "TCP", 00:16:06.117 "adrfam": "IPv4", 00:16:06.117 "traddr": "10.0.0.2", 00:16:06.117 "trsvcid": "4420" 00:16:06.117 }, 00:16:06.117 "peer_address": { 00:16:06.117 "trtype": "TCP", 00:16:06.117 "adrfam": "IPv4", 00:16:06.117 "traddr": "10.0.0.1", 00:16:06.117 "trsvcid": "55370" 00:16:06.117 }, 00:16:06.117 "auth": { 00:16:06.117 "state": "completed", 00:16:06.117 "digest": "sha256", 00:16:06.117 "dhgroup": "null" 00:16:06.117 } 00:16:06.117 } 00:16:06.117 ]' 00:16:06.117 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.376 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.376 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.376 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:06.376 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.376 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.376 18:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.376 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.635 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:06.635 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:07.203 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.203 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:07.203 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.203 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.203 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.203 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.203 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.203 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.462 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:07.462 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.462 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:07.463 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:07.463 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:07.463 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.463 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:07.463 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.463 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.463 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.463 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:07.463 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.463 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.463 00:16:07.463 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.463 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.463 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.721 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.721 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.721 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.721 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.721 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.721 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.721 { 00:16:07.721 "cntlid": 7, 00:16:07.721 "qid": 0, 00:16:07.721 "state": "enabled", 00:16:07.721 "thread": "nvmf_tgt_poll_group_000", 00:16:07.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:07.721 "listen_address": { 00:16:07.721 "trtype": "TCP", 00:16:07.721 "adrfam": "IPv4", 00:16:07.721 "traddr": "10.0.0.2", 00:16:07.721 "trsvcid": "4420" 00:16:07.721 }, 00:16:07.721 "peer_address": { 00:16:07.721 "trtype": "TCP", 00:16:07.721 "adrfam": "IPv4", 00:16:07.721 "traddr": "10.0.0.1", 00:16:07.721 "trsvcid": "55402" 00:16:07.721 }, 00:16:07.721 "auth": { 00:16:07.721 "state": "completed", 00:16:07.721 "digest": "sha256", 00:16:07.721 "dhgroup": "null" 00:16:07.721 } 00:16:07.721 } 00:16:07.721 ]' 00:16:07.721 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.721 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.721 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.980 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:07.980 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.980 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.980 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.980 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.239 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:08.239 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:08.806 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.806 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:08.806 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.806 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.806 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.806 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.806 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.806 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:08.806 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:08.806 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:08.806 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.806 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.806 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:08.806 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.806 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.806 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.806 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.806 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.806 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.806 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.806 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.806 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.065 00:16:09.065 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.065 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.065 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.324 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.324 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.325 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.325 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.325 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.325 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.325 { 00:16:09.325 "cntlid": 9, 00:16:09.325 "qid": 0, 00:16:09.325 "state": "enabled", 00:16:09.325 "thread": "nvmf_tgt_poll_group_000", 00:16:09.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:09.325 "listen_address": { 00:16:09.325 "trtype": "TCP", 00:16:09.325 "adrfam": "IPv4", 00:16:09.325 "traddr": "10.0.0.2", 00:16:09.325 "trsvcid": "4420" 00:16:09.325 }, 00:16:09.325 "peer_address": { 00:16:09.325 "trtype": "TCP", 00:16:09.325 "adrfam": "IPv4", 00:16:09.325 "traddr": "10.0.0.1", 00:16:09.325 "trsvcid": "55426" 00:16:09.325 }, 00:16:09.325 "auth": { 00:16:09.325 "state": "completed", 00:16:09.325 "digest": "sha256", 00:16:09.325 "dhgroup": "ffdhe2048" 00:16:09.325 } 00:16:09.325 } 00:16:09.325 ]' 00:16:09.325 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.325 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.325 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.325 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:09.325 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.583 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.583 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.583 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.583 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:09.583 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:10.151 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.151 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:10.151 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.151 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.151 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.151 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.151 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:10.151 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:10.410 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:10.410 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.410 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.410 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:10.410 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:10.410 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.410 18:24:03 
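For orientation: one pass of the trace above boils down to three host-side RPCs. A minimal sketch, with the socket path, addresses, and NQNs copied from this run; key1/ckey1 name DH-HMAC-CHAP keys assumed to have been loaded earlier in auth.sh (that setup is outside this excerpt):

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  # pin the host to one digest/dhgroup combination for this iteration
  $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # attach with in-band auth: key1 authenticates the host, ckey1 the controller
  $RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $RPC bdev_nvme_get_controllers   # expect a single controller named nvme0
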
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.410 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.410 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.410 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.410 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.410 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.410 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.669 00:16:10.670 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.670 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.670 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.928 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.928 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.928 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.928 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.928 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.928 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.928 { 00:16:10.928 "cntlid": 11, 00:16:10.928 "qid": 0, 00:16:10.928 "state": "enabled", 00:16:10.928 "thread": "nvmf_tgt_poll_group_000", 00:16:10.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:10.928 "listen_address": { 00:16:10.928 "trtype": "TCP", 00:16:10.928 "adrfam": "IPv4", 00:16:10.928 "traddr": "10.0.0.2", 00:16:10.928 "trsvcid": "4420" 00:16:10.928 }, 00:16:10.928 "peer_address": { 00:16:10.928 "trtype": "TCP", 00:16:10.928 "adrfam": "IPv4", 00:16:10.928 "traddr": "10.0.0.1", 00:16:10.928 "trsvcid": "55456" 00:16:10.928 }, 00:16:10.928 "auth": { 00:16:10.928 "state": "completed", 00:16:10.928 "digest": "sha256", 00:16:10.928 "dhgroup": "ffdhe2048" 00:16:10.928 } 00:16:10.928 } 00:16:10.928 ]' 00:16:10.928 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.928 18:24:04 
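The verification that follows each attach is worth unpacking. rpc_cmd drives the target application, while hostrpc (target/auth.sh@31) passes -s /var/tmp/host.sock to a second SPDK instance playing the host role; the target is then asked for the subsystem's qpairs, and the negotiated auth parameters are compared with what was configured. A condensed equivalent of the target/auth.sh@73-77 checks, using the same jq filters as the trace:

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]      # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]   # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake actually ran
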
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.928 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.928 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:10.928 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.186 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.186 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.186 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.186 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:11.186 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:11.754 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.754 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:11.754 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.754 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:12.013 18:24:05 
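Each iteration then tears itself down the same way, as just seen for key1: drop the SPDK-side controller, prove the same credentials once more through the kernel initiator, and deauthorize the host so the next combination starts from a clean subsystem. Sketched with placeholders: $key/$ckey stand for the DHHC-1 secret strings, $hostnqn for the uuid NQN above, and nvme_connect is the target/auth.sh@36 wrapper around nvme connect:

  hostrpc bdev_nvme_detach_controller nvme0
  nvme_connect --dhchap-secret "$key" ${ckey:+--dhchap-ctrl-secret "$ckey"}
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0     # "disconnected 1 controller(s)"
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
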
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.013 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.272 00:16:12.272 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.273 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.273 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.531 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.531 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.531 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.531 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.531 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.531 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.531 { 00:16:12.531 "cntlid": 13, 00:16:12.531 "qid": 0, 00:16:12.531 "state": "enabled", 00:16:12.531 "thread": "nvmf_tgt_poll_group_000", 00:16:12.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:12.531 "listen_address": { 00:16:12.531 "trtype": "TCP", 00:16:12.531 "adrfam": "IPv4", 00:16:12.531 "traddr": "10.0.0.2", 00:16:12.531 "trsvcid": "4420" 00:16:12.531 }, 00:16:12.531 "peer_address": { 00:16:12.531 "trtype": "TCP", 00:16:12.531 "adrfam": "IPv4", 00:16:12.531 "traddr": "10.0.0.1", 00:16:12.531 "trsvcid": "55474" 00:16:12.531 }, 00:16:12.531 "auth": { 00:16:12.531 "state": "completed", 00:16:12.531 "digest": 
"sha256", 00:16:12.531 "dhgroup": "ffdhe2048" 00:16:12.531 } 00:16:12.531 } 00:16:12.531 ]' 00:16:12.531 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.531 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.531 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.531 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:12.531 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.790 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.790 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.790 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.790 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:12.790 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:13.357 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.357 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:13.357 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.357 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.616 18:24:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.616 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.874 00:16:13.874 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.874 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.874 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.133 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.133 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.133 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.133 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.133 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.133 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.133 { 00:16:14.133 "cntlid": 15, 00:16:14.133 "qid": 0, 00:16:14.133 "state": "enabled", 00:16:14.133 "thread": "nvmf_tgt_poll_group_000", 00:16:14.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:14.133 "listen_address": { 00:16:14.133 "trtype": "TCP", 00:16:14.133 "adrfam": "IPv4", 00:16:14.133 "traddr": "10.0.0.2", 00:16:14.133 "trsvcid": "4420" 00:16:14.133 }, 00:16:14.133 "peer_address": { 00:16:14.133 "trtype": "TCP", 00:16:14.133 "adrfam": "IPv4", 00:16:14.133 "traddr": "10.0.0.1", 00:16:14.133 
"trsvcid": "55506" 00:16:14.133 }, 00:16:14.133 "auth": { 00:16:14.133 "state": "completed", 00:16:14.133 "digest": "sha256", 00:16:14.133 "dhgroup": "ffdhe2048" 00:16:14.133 } 00:16:14.133 } 00:16:14.133 ]' 00:16:14.133 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.133 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.133 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.391 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:14.391 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.391 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.391 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.391 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.391 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:14.391 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:14.958 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.958 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:14.958 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.958 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.958 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.958 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.958 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.958 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:14.958 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:15.219 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:15.219 18:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.219 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.219 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:15.219 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.219 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.219 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.219 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.219 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.219 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.219 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.219 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.219 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.477 00:16:15.477 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.477 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.477 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.735 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.735 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.735 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.735 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.735 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.735 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.735 { 00:16:15.735 "cntlid": 17, 00:16:15.735 "qid": 0, 00:16:15.735 "state": "enabled", 00:16:15.735 "thread": "nvmf_tgt_poll_group_000", 00:16:15.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:15.735 "listen_address": { 00:16:15.735 "trtype": "TCP", 00:16:15.735 "adrfam": "IPv4", 
00:16:15.735 "traddr": "10.0.0.2", 00:16:15.735 "trsvcid": "4420" 00:16:15.735 }, 00:16:15.735 "peer_address": { 00:16:15.735 "trtype": "TCP", 00:16:15.735 "adrfam": "IPv4", 00:16:15.735 "traddr": "10.0.0.1", 00:16:15.735 "trsvcid": "40958" 00:16:15.735 }, 00:16:15.735 "auth": { 00:16:15.735 "state": "completed", 00:16:15.735 "digest": "sha256", 00:16:15.735 "dhgroup": "ffdhe3072" 00:16:15.735 } 00:16:15.735 } 00:16:15.735 ]' 00:16:15.735 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.735 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.735 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.735 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:15.735 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.994 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.994 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.994 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.994 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:15.994 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:16.562 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.562 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:16.562 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.562 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.562 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.562 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.562 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:16.562 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:16.820 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:16.820 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.820 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:16.820 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:16.820 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:16.820 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.820 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.820 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.820 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.820 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.820 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.820 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.820 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.078 00:16:17.078 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.078 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.078 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.337 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.337 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.337 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.337 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.337 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.337 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.337 { 
00:16:17.337 "cntlid": 19, 00:16:17.337 "qid": 0, 00:16:17.337 "state": "enabled", 00:16:17.337 "thread": "nvmf_tgt_poll_group_000", 00:16:17.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:17.337 "listen_address": { 00:16:17.337 "trtype": "TCP", 00:16:17.337 "adrfam": "IPv4", 00:16:17.337 "traddr": "10.0.0.2", 00:16:17.337 "trsvcid": "4420" 00:16:17.337 }, 00:16:17.337 "peer_address": { 00:16:17.337 "trtype": "TCP", 00:16:17.337 "adrfam": "IPv4", 00:16:17.337 "traddr": "10.0.0.1", 00:16:17.337 "trsvcid": "40982" 00:16:17.337 }, 00:16:17.337 "auth": { 00:16:17.337 "state": "completed", 00:16:17.337 "digest": "sha256", 00:16:17.337 "dhgroup": "ffdhe3072" 00:16:17.337 } 00:16:17.337 } 00:16:17.337 ]' 00:16:17.337 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.337 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.337 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.337 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:17.337 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.596 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.596 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.596 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.596 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:17.596 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:18.163 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.163 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:18.163 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.163 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.163 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.163 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.163 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:18.163 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:18.422 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:18.422 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.422 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.422 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:18.422 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:18.422 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.422 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.422 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.422 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.422 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.422 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.422 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.422 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.681 00:16:18.681 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.681 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.681 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.939 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.939 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.939 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.939 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.939 18:24:12 
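Zooming out, the frames tagged target/auth.sh@119-123 throughout this section reveal the loop generating it. Reconstructed as a sketch, with array names assumed to match the script and the dhgroup list as observed so far in this run:

  for dhgroup in "${dhgroups[@]}"; do      # null ffdhe2048 ffdhe3072 ffdhe4096 ...
      for keyid in "${!keys[@]}"; do       # 0 1 2 3
          # restrict the host to a single digest/dhgroup pair...
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          # ...then run the full add_host/attach/verify/teardown cycle
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done
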
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.939 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.939 { 00:16:18.939 "cntlid": 21, 00:16:18.939 "qid": 0, 00:16:18.939 "state": "enabled", 00:16:18.939 "thread": "nvmf_tgt_poll_group_000", 00:16:18.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:18.939 "listen_address": { 00:16:18.939 "trtype": "TCP", 00:16:18.939 "adrfam": "IPv4", 00:16:18.939 "traddr": "10.0.0.2", 00:16:18.939 "trsvcid": "4420" 00:16:18.939 }, 00:16:18.939 "peer_address": { 00:16:18.939 "trtype": "TCP", 00:16:18.939 "adrfam": "IPv4", 00:16:18.939 "traddr": "10.0.0.1", 00:16:18.939 "trsvcid": "41010" 00:16:18.939 }, 00:16:18.939 "auth": { 00:16:18.939 "state": "completed", 00:16:18.939 "digest": "sha256", 00:16:18.939 "dhgroup": "ffdhe3072" 00:16:18.939 } 00:16:18.939 } 00:16:18.939 ]' 00:16:18.939 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.939 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.939 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.939 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:18.939 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.198 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.198 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.198 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.198 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:19.198 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:19.766 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.766 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:19.766 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.766 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.766 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:19.766 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.766 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:19.766 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:20.025 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:20.025 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.025 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.025 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:20.025 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:20.025 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.025 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:20.025 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.025 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.025 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.025 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:20.025 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.025 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.284 00:16:20.284 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.284 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.284 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.542 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.542 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.542 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.542 18:24:13 
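bdev_connect, seen again just above for key3, appears to be a thin forwarding wrapper: target/auth.sh@71 calls it with a bdev name and key flags, and target/auth.sh@60 issues the actual attach RPC with the name pinned to nvme0, which is what the later [[ nvme0 == nvme0 ]] name check relies on. The expansion, as inferred from the paired @71/@60 frames ($hostnqn again stands for the uuid NQN of this run):

  bdev_connect -b nvme0 --dhchap-key key3
  # ...expands to:
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
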
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.542 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.542 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.542 { 00:16:20.542 "cntlid": 23, 00:16:20.542 "qid": 0, 00:16:20.542 "state": "enabled", 00:16:20.542 "thread": "nvmf_tgt_poll_group_000", 00:16:20.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:20.542 "listen_address": { 00:16:20.542 "trtype": "TCP", 00:16:20.542 "adrfam": "IPv4", 00:16:20.542 "traddr": "10.0.0.2", 00:16:20.542 "trsvcid": "4420" 00:16:20.542 }, 00:16:20.542 "peer_address": { 00:16:20.542 "trtype": "TCP", 00:16:20.542 "adrfam": "IPv4", 00:16:20.542 "traddr": "10.0.0.1", 00:16:20.542 "trsvcid": "41042" 00:16:20.542 }, 00:16:20.542 "auth": { 00:16:20.542 "state": "completed", 00:16:20.542 "digest": "sha256", 00:16:20.542 "dhgroup": "ffdhe3072" 00:16:20.542 } 00:16:20.542 } 00:16:20.542 ]' 00:16:20.542 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.542 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.542 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.542 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:20.542 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.542 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.542 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.542 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.801 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:20.801 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:21.368 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.368 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:21.368 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.368 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.368 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:21.368 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.368 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.368 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:21.368 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:21.627 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:21.627 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.627 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.627 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:21.627 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:21.627 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.627 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.627 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.627 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.627 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.627 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.627 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.627 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.886 00:16:21.886 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.886 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.886 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.145 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.145 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.145 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.145 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.145 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.145 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.145 { 00:16:22.145 "cntlid": 25, 00:16:22.145 "qid": 0, 00:16:22.145 "state": "enabled", 00:16:22.145 "thread": "nvmf_tgt_poll_group_000", 00:16:22.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:22.145 "listen_address": { 00:16:22.145 "trtype": "TCP", 00:16:22.146 "adrfam": "IPv4", 00:16:22.146 "traddr": "10.0.0.2", 00:16:22.146 "trsvcid": "4420" 00:16:22.146 }, 00:16:22.146 "peer_address": { 00:16:22.146 "trtype": "TCP", 00:16:22.146 "adrfam": "IPv4", 00:16:22.146 "traddr": "10.0.0.1", 00:16:22.146 "trsvcid": "41074" 00:16:22.146 }, 00:16:22.146 "auth": { 00:16:22.146 "state": "completed", 00:16:22.146 "digest": "sha256", 00:16:22.146 "dhgroup": "ffdhe4096" 00:16:22.146 } 00:16:22.146 } 00:16:22.146 ]' 00:16:22.146 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.146 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.146 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.146 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:22.146 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.405 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.405 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.405 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.405 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:22.405 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:22.973 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.973 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:22.973 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.973 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.973 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.973 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.973 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:22.973 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:23.232 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:23.232 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.232 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.232 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:23.232 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:23.232 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.232 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.232 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.232 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.232 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.232 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.232 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.232 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.491 00:16:23.491 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.491 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.491 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.750 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.750 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.750 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.750 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.750 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.750 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.750 { 00:16:23.750 "cntlid": 27, 00:16:23.750 "qid": 0, 00:16:23.750 "state": "enabled", 00:16:23.750 "thread": "nvmf_tgt_poll_group_000", 00:16:23.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:23.750 "listen_address": { 00:16:23.750 "trtype": "TCP", 00:16:23.750 "adrfam": "IPv4", 00:16:23.750 "traddr": "10.0.0.2", 00:16:23.750 "trsvcid": "4420" 00:16:23.750 }, 00:16:23.750 "peer_address": { 00:16:23.750 "trtype": "TCP", 00:16:23.750 "adrfam": "IPv4", 00:16:23.750 "traddr": "10.0.0.1", 00:16:23.750 "trsvcid": "41088" 00:16:23.750 }, 00:16:23.750 "auth": { 00:16:23.750 "state": "completed", 00:16:23.750 "digest": "sha256", 00:16:23.750 "dhgroup": "ffdhe4096" 00:16:23.750 } 00:16:23.750 } 00:16:23.750 ]' 00:16:23.750 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.750 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.750 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.750 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:23.750 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.750 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.751 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.751 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.009 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:24.009 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:24.576 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:24.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.576 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:24.576 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.576 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.576 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.576 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.576 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:24.576 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:24.835 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:24.835 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.835 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.835 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:24.835 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:24.835 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.835 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.835 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.835 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.835 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.835 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.835 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.835 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.119 00:16:25.119 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
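The passes above all follow one per-key cycle: the host-side SPDK app (driven over its own RPC socket, /var/tmp/host.sock) is restricted to a single digest/dhgroup pair, a controller is attached with the DH-HMAC-CHAP keys under test, and the attach is confirmed by name. A minimal bash sketch of that cycle, using only RPCs that appear in the log itself; $SPDK_ROOT and $hostnqn are placeholders, and key0/ckey0 are assumed to be key names already registered on both sides:

rpc="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/host.sock"   # host app's RPC socket ($SPDK_ROOT is a placeholder)

# Limit the initiator to one digest/dhgroup combination for this pass.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Attach, authenticating with the host key and (bidirectional) controller key.
$rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# A successful DH-HMAC-CHAP exchange leaves the controller visible by name.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]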
00:16:25.119 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.119 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.378 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.378 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.378 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.378 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.378 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.378 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.378 { 00:16:25.378 "cntlid": 29, 00:16:25.378 "qid": 0, 00:16:25.378 "state": "enabled", 00:16:25.378 "thread": "nvmf_tgt_poll_group_000", 00:16:25.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:25.378 "listen_address": { 00:16:25.378 "trtype": "TCP", 00:16:25.378 "adrfam": "IPv4", 00:16:25.378 "traddr": "10.0.0.2", 00:16:25.378 "trsvcid": "4420" 00:16:25.378 }, 00:16:25.378 "peer_address": { 00:16:25.378 "trtype": "TCP", 00:16:25.378 "adrfam": "IPv4", 00:16:25.378 "traddr": "10.0.0.1", 00:16:25.378 "trsvcid": "56708" 00:16:25.378 }, 00:16:25.378 "auth": { 00:16:25.378 "state": "completed", 00:16:25.378 "digest": "sha256", 00:16:25.378 "dhgroup": "ffdhe4096" 00:16:25.378 } 00:16:25.378 } 00:16:25.378 ]' 00:16:25.378 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.378 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.378 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.378 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:25.378 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.378 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.378 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.378 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.637 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:25.637 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: 
--dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:26.206 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.206 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:26.206 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.206 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.206 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.206 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.206 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:26.206 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:26.464 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:26.464 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.464 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.464 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:26.464 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:26.464 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.464 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:26.464 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.464 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.464 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.464 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:26.464 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.464 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.724 00:16:26.724 18:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.724 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.724 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.984 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.984 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.984 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.985 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.985 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.985 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.985 { 00:16:26.985 "cntlid": 31, 00:16:26.985 "qid": 0, 00:16:26.985 "state": "enabled", 00:16:26.985 "thread": "nvmf_tgt_poll_group_000", 00:16:26.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:26.985 "listen_address": { 00:16:26.985 "trtype": "TCP", 00:16:26.985 "adrfam": "IPv4", 00:16:26.985 "traddr": "10.0.0.2", 00:16:26.985 "trsvcid": "4420" 00:16:26.985 }, 00:16:26.985 "peer_address": { 00:16:26.985 "trtype": "TCP", 00:16:26.985 "adrfam": "IPv4", 00:16:26.985 "traddr": "10.0.0.1", 00:16:26.985 "trsvcid": "56736" 00:16:26.985 }, 00:16:26.985 "auth": { 00:16:26.985 "state": "completed", 00:16:26.985 "digest": "sha256", 00:16:26.985 "dhgroup": "ffdhe4096" 00:16:26.985 } 00:16:26.985 } 00:16:26.985 ]' 00:16:26.985 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.985 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.985 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.985 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:26.985 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.985 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.985 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.985 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.244 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:27.244 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:27.813 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.813 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:27.813 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.813 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.813 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.813 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.813 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.813 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:27.813 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:28.073 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:28.073 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.073 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.073 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:28.073 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:28.073 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.073 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.073 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.073 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.073 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.073 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.073 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.073 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.332 00:16:28.332 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.332 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.332 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.590 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.590 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.590 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.590 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.590 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.590 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.590 { 00:16:28.590 "cntlid": 33, 00:16:28.590 "qid": 0, 00:16:28.590 "state": "enabled", 00:16:28.590 "thread": "nvmf_tgt_poll_group_000", 00:16:28.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:28.590 "listen_address": { 00:16:28.590 "trtype": "TCP", 00:16:28.590 "adrfam": "IPv4", 00:16:28.590 "traddr": "10.0.0.2", 00:16:28.590 "trsvcid": "4420" 00:16:28.590 }, 00:16:28.590 "peer_address": { 00:16:28.590 "trtype": "TCP", 00:16:28.590 "adrfam": "IPv4", 00:16:28.590 "traddr": "10.0.0.1", 00:16:28.590 "trsvcid": "56762" 00:16:28.590 }, 00:16:28.590 "auth": { 00:16:28.590 "state": "completed", 00:16:28.590 "digest": "sha256", 00:16:28.590 "dhgroup": "ffdhe6144" 00:16:28.590 } 00:16:28.590 } 00:16:28.590 ]' 00:16:28.590 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.590 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.590 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.590 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:28.590 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.590 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.590 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.590 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.849 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret 
DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:28.849 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:29.418 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.418 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:29.418 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.418 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.418 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.418 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.418 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:29.418 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:29.678 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:29.678 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.678 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.678 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:29.678 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:29.678 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.678 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.678 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.678 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.678 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.678 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.678 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.678 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.937 00:16:29.937 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.937 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.937 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.196 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.196 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.196 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.196 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.196 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.196 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.196 { 00:16:30.196 "cntlid": 35, 00:16:30.196 "qid": 0, 00:16:30.196 "state": "enabled", 00:16:30.196 "thread": "nvmf_tgt_poll_group_000", 00:16:30.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:30.196 "listen_address": { 00:16:30.196 "trtype": "TCP", 00:16:30.196 "adrfam": "IPv4", 00:16:30.196 "traddr": "10.0.0.2", 00:16:30.196 "trsvcid": "4420" 00:16:30.196 }, 00:16:30.196 "peer_address": { 00:16:30.196 "trtype": "TCP", 00:16:30.196 "adrfam": "IPv4", 00:16:30.196 "traddr": "10.0.0.1", 00:16:30.196 "trsvcid": "56788" 00:16:30.196 }, 00:16:30.196 "auth": { 00:16:30.196 "state": "completed", 00:16:30.196 "digest": "sha256", 00:16:30.196 "dhgroup": "ffdhe6144" 00:16:30.196 } 00:16:30.196 } 00:16:30.196 ]' 00:16:30.196 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.196 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.196 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.456 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:30.456 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.456 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.456 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.456 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.456 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:30.456 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:31.025 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.284 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.852 00:16:31.852 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.852 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.852 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.852 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.852 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.853 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.853 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.853 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.853 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.853 { 00:16:31.853 "cntlid": 37, 00:16:31.853 "qid": 0, 00:16:31.853 "state": "enabled", 00:16:31.853 "thread": "nvmf_tgt_poll_group_000", 00:16:31.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:31.853 "listen_address": { 00:16:31.853 "trtype": "TCP", 00:16:31.853 "adrfam": "IPv4", 00:16:31.853 "traddr": "10.0.0.2", 00:16:31.853 "trsvcid": "4420" 00:16:31.853 }, 00:16:31.853 "peer_address": { 00:16:31.853 "trtype": "TCP", 00:16:31.853 "adrfam": "IPv4", 00:16:31.853 "traddr": "10.0.0.1", 00:16:31.853 "trsvcid": "56812" 00:16:31.853 }, 00:16:31.853 "auth": { 00:16:31.853 "state": "completed", 00:16:31.853 "digest": "sha256", 00:16:31.853 "dhgroup": "ffdhe6144" 00:16:31.853 } 00:16:31.853 } 00:16:31.853 ]' 00:16:31.853 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.853 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.853 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.111 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:32.111 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.111 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.111 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:32.111 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.371 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:32.371 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:32.939 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.939 18:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.939 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.507 00:16:33.507 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.507 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.507 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.507 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.507 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.507 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.507 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.507 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.507 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.507 { 00:16:33.507 "cntlid": 39, 00:16:33.507 "qid": 0, 00:16:33.507 "state": "enabled", 00:16:33.507 "thread": "nvmf_tgt_poll_group_000", 00:16:33.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:33.507 "listen_address": { 00:16:33.507 "trtype": "TCP", 00:16:33.507 "adrfam": "IPv4", 00:16:33.507 "traddr": "10.0.0.2", 00:16:33.507 "trsvcid": "4420" 00:16:33.507 }, 00:16:33.507 "peer_address": { 00:16:33.507 "trtype": "TCP", 00:16:33.507 "adrfam": "IPv4", 00:16:33.507 "traddr": "10.0.0.1", 00:16:33.507 "trsvcid": "56848" 00:16:33.507 }, 00:16:33.507 "auth": { 00:16:33.507 "state": "completed", 00:16:33.507 "digest": "sha256", 00:16:33.507 "dhgroup": "ffdhe6144" 00:16:33.507 } 00:16:33.507 } 00:16:33.507 ]' 00:16:33.507 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.766 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.766 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.766 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:33.766 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.766 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:33.766 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.766 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.025 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:34.025 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
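After each attach, the target side is interrogated to confirm what was actually negotiated: nvmf_subsystem_get_qpairs returns, per queue pair, an auth object whose digest, dhgroup, and state fields the test asserts with jq. A condensed sketch of that check, assuming the target app answers on rpc.py's default socket and that a single qpair is connected (field names are taken from the JSON dumps above):

# Query the target for the connected qpairs of the subsystem under test.
qpairs=$("$SPDK_ROOT"/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# The negotiated parameters must match this pass, and auth must have finished.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]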
00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.593 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.161 00:16:35.161 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.161 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.161 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.419 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.420 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.420 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.420 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.420 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.420 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.420 { 00:16:35.420 "cntlid": 41, 00:16:35.420 "qid": 0, 00:16:35.420 "state": "enabled", 00:16:35.420 "thread": "nvmf_tgt_poll_group_000", 00:16:35.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:35.420 "listen_address": { 00:16:35.420 "trtype": "TCP", 00:16:35.420 "adrfam": "IPv4", 00:16:35.420 "traddr": "10.0.0.2", 00:16:35.420 "trsvcid": "4420" 00:16:35.420 }, 00:16:35.420 "peer_address": { 00:16:35.420 "trtype": "TCP", 00:16:35.420 "adrfam": "IPv4", 00:16:35.420 "traddr": "10.0.0.1", 00:16:35.420 "trsvcid": "51408" 00:16:35.420 }, 00:16:35.420 "auth": { 00:16:35.420 "state": "completed", 00:16:35.420 "digest": "sha256", 00:16:35.420 "dhgroup": "ffdhe8192" 00:16:35.420 } 00:16:35.420 } 00:16:35.420 ]' 00:16:35.420 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.420 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.420 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.420 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:35.420 18:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.420 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.420 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.420 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.678 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:35.679 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:36.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:36.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:36.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:36.509 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:36.509 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.509 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.509 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:36.509 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.509 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.509 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.509 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.509 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.509 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.509 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.509 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.509 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.077 00:16:37.077 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.077 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.078 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.078 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.078 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.078 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.078 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.078 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.078 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.078 { 00:16:37.078 "cntlid": 43, 00:16:37.078 "qid": 0, 00:16:37.078 "state": "enabled", 00:16:37.078 "thread": "nvmf_tgt_poll_group_000", 00:16:37.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:37.078 "listen_address": { 00:16:37.078 "trtype": "TCP", 00:16:37.078 "adrfam": "IPv4", 00:16:37.078 "traddr": "10.0.0.2", 00:16:37.078 "trsvcid": "4420" 00:16:37.078 }, 00:16:37.078 "peer_address": { 00:16:37.078 "trtype": "TCP", 00:16:37.078 "adrfam": "IPv4", 00:16:37.078 "traddr": "10.0.0.1", 00:16:37.078 "trsvcid": "51416" 00:16:37.078 }, 00:16:37.078 "auth": { 00:16:37.078 "state": "completed", 00:16:37.078 "digest": "sha256", 00:16:37.078 "dhgroup": "ffdhe8192" 00:16:37.078 } 00:16:37.078 } 00:16:37.078 ]' 00:16:37.078 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.337 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:37.337 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.337 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:37.337 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.337 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.337 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.337 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.595 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:37.595 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.164 18:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.164 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.422 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.422 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.422 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.422 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.681 00:16:38.681 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.681 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.681 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.939 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.939 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.939 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.939 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.939 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.939 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.939 { 00:16:38.939 "cntlid": 45, 00:16:38.939 "qid": 0, 00:16:38.939 "state": "enabled", 00:16:38.939 "thread": "nvmf_tgt_poll_group_000", 00:16:38.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:38.939 "listen_address": { 00:16:38.939 "trtype": "TCP", 00:16:38.939 "adrfam": "IPv4", 00:16:38.939 "traddr": "10.0.0.2", 00:16:38.939 "trsvcid": "4420" 00:16:38.939 }, 00:16:38.939 "peer_address": { 00:16:38.939 "trtype": "TCP", 00:16:38.939 "adrfam": "IPv4", 00:16:38.939 "traddr": "10.0.0.1", 00:16:38.939 "trsvcid": "51444" 00:16:38.939 }, 00:16:38.939 "auth": { 00:16:38.939 "state": "completed", 00:16:38.939 "digest": "sha256", 00:16:38.939 "dhgroup": "ffdhe8192" 00:16:38.939 } 00:16:38.939 } 00:16:38.939 ]' 00:16:38.939 
18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.939 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.939 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.197 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:39.197 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.197 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.197 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.197 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.455 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:39.455 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.023 18:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.023 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:40.024 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.024 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.592 00:16:40.592 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.592 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.592 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.851 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.851 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.851 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.851 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.851 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.851 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.851 { 00:16:40.851 "cntlid": 47, 00:16:40.851 "qid": 0, 00:16:40.851 "state": "enabled", 00:16:40.851 "thread": "nvmf_tgt_poll_group_000", 00:16:40.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:40.851 "listen_address": { 00:16:40.851 "trtype": "TCP", 00:16:40.851 "adrfam": "IPv4", 00:16:40.851 "traddr": "10.0.0.2", 00:16:40.851 "trsvcid": "4420" 00:16:40.851 }, 00:16:40.851 "peer_address": { 00:16:40.851 "trtype": "TCP", 00:16:40.851 "adrfam": "IPv4", 00:16:40.851 "traddr": "10.0.0.1", 00:16:40.851 "trsvcid": "51466" 00:16:40.851 }, 00:16:40.851 "auth": { 00:16:40.851 "state": "completed", 00:16:40.851 
"digest": "sha256", 00:16:40.851 "dhgroup": "ffdhe8192" 00:16:40.851 } 00:16:40.851 } 00:16:40.851 ]' 00:16:40.851 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.851 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.851 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.851 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:40.851 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.851 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.851 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.851 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.167 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:41.167 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:41.736 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.736 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:41.736 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.736 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.736 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.736 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:41.736 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.736 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.736 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:41.736 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:41.995 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:41.995 18:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.995 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:41.995 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:41.995 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.995 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.995 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.995 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.995 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.995 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.995 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.995 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.995 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.254 00:16:42.254 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.254 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.254 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.255 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.255 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.255 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.255 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.514 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.514 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.514 { 00:16:42.514 "cntlid": 49, 00:16:42.514 "qid": 0, 00:16:42.514 "state": "enabled", 00:16:42.514 "thread": "nvmf_tgt_poll_group_000", 00:16:42.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:42.514 "listen_address": { 00:16:42.514 "trtype": "TCP", 00:16:42.514 "adrfam": "IPv4", 
00:16:42.514 "traddr": "10.0.0.2", 00:16:42.514 "trsvcid": "4420" 00:16:42.514 }, 00:16:42.514 "peer_address": { 00:16:42.514 "trtype": "TCP", 00:16:42.514 "adrfam": "IPv4", 00:16:42.514 "traddr": "10.0.0.1", 00:16:42.514 "trsvcid": "51494" 00:16:42.514 }, 00:16:42.514 "auth": { 00:16:42.514 "state": "completed", 00:16:42.514 "digest": "sha384", 00:16:42.514 "dhgroup": "null" 00:16:42.514 } 00:16:42.514 } 00:16:42.514 ]' 00:16:42.514 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.514 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.514 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.514 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:42.514 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.514 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.514 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.514 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.773 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:42.773 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:43.342 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.342 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:43.342 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.342 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.342 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.342 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.342 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:43.342 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:43.601 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:43.601 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.602 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.602 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:43.602 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.602 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.602 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.602 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.602 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.602 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.602 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.602 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.602 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.602 00:16:43.861 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.861 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.861 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.861 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.861 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.861 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.861 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.861 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.861 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.861 { 00:16:43.861 "cntlid": 51, 00:16:43.861 "qid": 0, 00:16:43.861 "state": "enabled", 
00:16:43.861 "thread": "nvmf_tgt_poll_group_000", 00:16:43.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:43.861 "listen_address": { 00:16:43.861 "trtype": "TCP", 00:16:43.861 "adrfam": "IPv4", 00:16:43.861 "traddr": "10.0.0.2", 00:16:43.861 "trsvcid": "4420" 00:16:43.861 }, 00:16:43.861 "peer_address": { 00:16:43.861 "trtype": "TCP", 00:16:43.861 "adrfam": "IPv4", 00:16:43.861 "traddr": "10.0.0.1", 00:16:43.861 "trsvcid": "51518" 00:16:43.861 }, 00:16:43.861 "auth": { 00:16:43.861 "state": "completed", 00:16:43.861 "digest": "sha384", 00:16:43.861 "dhgroup": "null" 00:16:43.861 } 00:16:43.861 } 00:16:43.861 ]' 00:16:43.861 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.861 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.120 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.120 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:44.120 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.120 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.120 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.120 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.380 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:44.380 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:44.949 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.949 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:44.949 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.949 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.949 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.949 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.949 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:44.950 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:44.950 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:44.950 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.950 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:44.950 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:44.950 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.950 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.950 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.950 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.950 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.950 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.950 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.950 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.950 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.208 00:16:45.208 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.208 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.208 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.466 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.466 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.466 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.466 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.466 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.466 18:24:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.466 { 00:16:45.466 "cntlid": 53, 00:16:45.466 "qid": 0, 00:16:45.466 "state": "enabled", 00:16:45.466 "thread": "nvmf_tgt_poll_group_000", 00:16:45.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:45.466 "listen_address": { 00:16:45.466 "trtype": "TCP", 00:16:45.466 "adrfam": "IPv4", 00:16:45.466 "traddr": "10.0.0.2", 00:16:45.466 "trsvcid": "4420" 00:16:45.466 }, 00:16:45.466 "peer_address": { 00:16:45.466 "trtype": "TCP", 00:16:45.466 "adrfam": "IPv4", 00:16:45.466 "traddr": "10.0.0.1", 00:16:45.466 "trsvcid": "45346" 00:16:45.466 }, 00:16:45.466 "auth": { 00:16:45.466 "state": "completed", 00:16:45.466 "digest": "sha384", 00:16:45.466 "dhgroup": "null" 00:16:45.466 } 00:16:45.466 } 00:16:45.466 ]' 00:16:45.466 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.466 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.466 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.725 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:45.725 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.725 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.725 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.725 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.725 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:45.725 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:46.293 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.293 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:46.293 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.293 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.293 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.293 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:46.293 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:46.293 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:46.552 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:46.552 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.552 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.553 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:46.553 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.553 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.553 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:46.553 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.553 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.553 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.553 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.553 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.553 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.812 00:16:46.812 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.812 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.812 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.071 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.071 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.071 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.071 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.071 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.071 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.071 { 00:16:47.071 "cntlid": 55, 00:16:47.071 "qid": 0, 00:16:47.071 "state": "enabled", 00:16:47.071 "thread": "nvmf_tgt_poll_group_000", 00:16:47.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:47.071 "listen_address": { 00:16:47.071 "trtype": "TCP", 00:16:47.071 "adrfam": "IPv4", 00:16:47.071 "traddr": "10.0.0.2", 00:16:47.071 "trsvcid": "4420" 00:16:47.071 }, 00:16:47.071 "peer_address": { 00:16:47.071 "trtype": "TCP", 00:16:47.071 "adrfam": "IPv4", 00:16:47.071 "traddr": "10.0.0.1", 00:16:47.071 "trsvcid": "45368" 00:16:47.071 }, 00:16:47.071 "auth": { 00:16:47.071 "state": "completed", 00:16:47.071 "digest": "sha384", 00:16:47.071 "dhgroup": "null" 00:16:47.071 } 00:16:47.071 } 00:16:47.071 ]' 00:16:47.071 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.071 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.071 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.071 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:47.071 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.330 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.330 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.330 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.331 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:47.331 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:47.899 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.899 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:47.900 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.900 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.900 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.900 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.900 18:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.900 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:47.900 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:48.159 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:48.159 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.159 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.159 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:48.159 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.159 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.159 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.159 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.159 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.159 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.159 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.159 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.159 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.418 00:16:48.418 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.418 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.418 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.678 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.678 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.678 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:48.678 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.678 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.678 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.678 { 00:16:48.678 "cntlid": 57, 00:16:48.678 "qid": 0, 00:16:48.678 "state": "enabled", 00:16:48.678 "thread": "nvmf_tgt_poll_group_000", 00:16:48.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:48.678 "listen_address": { 00:16:48.678 "trtype": "TCP", 00:16:48.678 "adrfam": "IPv4", 00:16:48.678 "traddr": "10.0.0.2", 00:16:48.678 "trsvcid": "4420" 00:16:48.678 }, 00:16:48.678 "peer_address": { 00:16:48.678 "trtype": "TCP", 00:16:48.678 "adrfam": "IPv4", 00:16:48.678 "traddr": "10.0.0.1", 00:16:48.678 "trsvcid": "45398" 00:16:48.678 }, 00:16:48.678 "auth": { 00:16:48.678 "state": "completed", 00:16:48.678 "digest": "sha384", 00:16:48.678 "dhgroup": "ffdhe2048" 00:16:48.678 } 00:16:48.678 } 00:16:48.678 ]' 00:16:48.678 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.678 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.678 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.678 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:48.678 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.678 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.678 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.678 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.938 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:48.938 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:49.506 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.506 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.506 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.506 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.506 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.506 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.506 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:49.506 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:49.765 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:49.765 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.765 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:49.765 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:49.765 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:49.766 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.766 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.766 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.766 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.766 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.766 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.766 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.766 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.025 00:16:50.025 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.025 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.025 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.285 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.285 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.285 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.285 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.285 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.285 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.285 { 00:16:50.285 "cntlid": 59, 00:16:50.285 "qid": 0, 00:16:50.285 "state": "enabled", 00:16:50.285 "thread": "nvmf_tgt_poll_group_000", 00:16:50.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:50.285 "listen_address": { 00:16:50.285 "trtype": "TCP", 00:16:50.285 "adrfam": "IPv4", 00:16:50.285 "traddr": "10.0.0.2", 00:16:50.285 "trsvcid": "4420" 00:16:50.285 }, 00:16:50.285 "peer_address": { 00:16:50.285 "trtype": "TCP", 00:16:50.285 "adrfam": "IPv4", 00:16:50.285 "traddr": "10.0.0.1", 00:16:50.285 "trsvcid": "45432" 00:16:50.285 }, 00:16:50.285 "auth": { 00:16:50.285 "state": "completed", 00:16:50.285 "digest": "sha384", 00:16:50.285 "dhgroup": "ffdhe2048" 00:16:50.285 } 00:16:50.285 } 00:16:50.285 ]' 00:16:50.285 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.285 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.285 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.285 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:50.285 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.285 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.285 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.285 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.545 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:50.545 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:51.114 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.114 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:51.114 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.114 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.114 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.114 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.114 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:51.114 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:51.373 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:51.373 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.373 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.373 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:51.373 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:51.373 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.373 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.373 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.373 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.373 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.373 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.373 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.373 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.633 00:16:51.633 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.633 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.633 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.633 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.633 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.633 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.633 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.893 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.893 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.893 { 00:16:51.893 "cntlid": 61, 00:16:51.893 "qid": 0, 00:16:51.893 "state": "enabled", 00:16:51.893 "thread": "nvmf_tgt_poll_group_000", 00:16:51.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:51.893 "listen_address": { 00:16:51.893 "trtype": "TCP", 00:16:51.893 "adrfam": "IPv4", 00:16:51.893 "traddr": "10.0.0.2", 00:16:51.893 "trsvcid": "4420" 00:16:51.893 }, 00:16:51.893 "peer_address": { 00:16:51.893 "trtype": "TCP", 00:16:51.893 "adrfam": "IPv4", 00:16:51.893 "traddr": "10.0.0.1", 00:16:51.893 "trsvcid": "45466" 00:16:51.893 }, 00:16:51.893 "auth": { 00:16:51.893 "state": "completed", 00:16:51.893 "digest": "sha384", 00:16:51.893 "dhgroup": "ffdhe2048" 00:16:51.893 } 00:16:51.893 } 00:16:51.893 ]' 00:16:51.893 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.893 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.893 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.893 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:51.893 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.893 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.893 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.893 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.152 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:52.152 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:52.721 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.721 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:52.721 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.721 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.721 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.721 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.721 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.721 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.980 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:52.980 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.980 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.980 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:52.980 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.980 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.980 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:52.980 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.980 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.980 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.980 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.980 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.980 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.980 00:16:53.239 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.239 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.239 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.239 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.239 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.239 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.239 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.239 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.239 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.239 { 00:16:53.239 "cntlid": 63, 00:16:53.239 "qid": 0, 00:16:53.239 "state": "enabled", 00:16:53.239 "thread": "nvmf_tgt_poll_group_000", 00:16:53.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:53.239 "listen_address": { 00:16:53.239 "trtype": "TCP", 00:16:53.239 "adrfam": "IPv4", 00:16:53.239 "traddr": "10.0.0.2", 00:16:53.239 "trsvcid": "4420" 00:16:53.239 }, 00:16:53.239 "peer_address": { 00:16:53.239 "trtype": "TCP", 00:16:53.239 "adrfam": "IPv4", 00:16:53.239 "traddr": "10.0.0.1", 00:16:53.239 "trsvcid": "45496" 00:16:53.239 }, 00:16:53.239 "auth": { 00:16:53.239 "state": "completed", 00:16:53.239 "digest": "sha384", 00:16:53.239 "dhgroup": "ffdhe2048" 00:16:53.239 } 00:16:53.239 } 00:16:53.239 ]' 00:16:53.239 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.239 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.239 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.498 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:53.498 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.498 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.498 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.498 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.758 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:53.758 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:54.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.327 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.328 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.328 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.328 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.328 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.587 
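
The ffdhe3072/key0 pass underway above follows the same round-trip as every digest/dhgroup/key combination already completed in this log: pin the host-side options, register the host's DH-HMAC-CHAP keys on the subsystem, attach a controller (which is where the authentication exchange actually runs), then assert the negotiated parameters from the qpair listing. A condensed, hedged sketch of one such pass — assuming a target already listening on 10.0.0.2:4420, a host daemon at /var/tmp/host.sock, and named keys key0/ckey0 loaded beforehand; the hostrpc wrapper mirrors the suite's helper of the same name, while the two NQN variables are just conveniences for readability:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }

    # pin the host-side initiator to one digest and one DH group
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # register the host NQN on the target subsystem with its key pair
    scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # attach a controller; the DH-HMAC-CHAP exchange happens during this connect
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # the qpair listing should report the negotiated digest, group and state
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
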
00:16:54.587 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.587 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.587 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.847 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.847 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.847 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.847 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.847 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.847 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.847 { 00:16:54.847 "cntlid": 65, 00:16:54.847 "qid": 0, 00:16:54.847 "state": "enabled", 00:16:54.847 "thread": "nvmf_tgt_poll_group_000", 00:16:54.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:54.847 "listen_address": { 00:16:54.847 "trtype": "TCP", 00:16:54.847 "adrfam": "IPv4", 00:16:54.847 "traddr": "10.0.0.2", 00:16:54.847 "trsvcid": "4420" 00:16:54.847 }, 00:16:54.847 "peer_address": { 00:16:54.847 "trtype": "TCP", 00:16:54.847 "adrfam": "IPv4", 00:16:54.847 "traddr": "10.0.0.1", 00:16:54.847 "trsvcid": "57664" 00:16:54.847 }, 00:16:54.847 "auth": { 00:16:54.847 "state": "completed", 00:16:54.847 "digest": "sha384", 00:16:54.847 "dhgroup": "ffdhe3072" 00:16:54.847 } 00:16:54.847 } 00:16:54.847 ]' 00:16:54.847 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.847 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.847 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.847 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:54.847 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.107 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.107 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.107 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.107 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:55.107 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:16:55.675 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.675 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:55.675 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.675 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.675 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.675 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.675 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:55.675 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:55.935 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:55.935 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.935 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.935 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:55.935 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:55.936 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.936 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.936 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.936 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.936 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.936 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.936 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.936 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.194 00:16:56.194 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.194 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.194 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.518 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.518 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.518 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.518 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.518 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.518 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.518 { 00:16:56.518 "cntlid": 67, 00:16:56.518 "qid": 0, 00:16:56.518 "state": "enabled", 00:16:56.518 "thread": "nvmf_tgt_poll_group_000", 00:16:56.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:56.518 "listen_address": { 00:16:56.518 "trtype": "TCP", 00:16:56.518 "adrfam": "IPv4", 00:16:56.518 "traddr": "10.0.0.2", 00:16:56.518 "trsvcid": "4420" 00:16:56.518 }, 00:16:56.518 "peer_address": { 00:16:56.518 "trtype": "TCP", 00:16:56.518 "adrfam": "IPv4", 00:16:56.518 "traddr": "10.0.0.1", 00:16:56.518 "trsvcid": "57680" 00:16:56.518 }, 00:16:56.518 "auth": { 00:16:56.518 "state": "completed", 00:16:56.518 "digest": "sha384", 00:16:56.518 "dhgroup": "ffdhe3072" 00:16:56.518 } 00:16:56.518 } 00:16:56.518 ]' 00:16:56.518 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.518 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.518 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.518 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:56.518 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.518 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.518 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.518 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.803 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret 
DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:56.803 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.460 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.461 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.737 00:16:57.737 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.737 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.737 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.995 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.995 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.995 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.995 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.995 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.995 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.995 { 00:16:57.995 "cntlid": 69, 00:16:57.995 "qid": 0, 00:16:57.995 "state": "enabled", 00:16:57.995 "thread": "nvmf_tgt_poll_group_000", 00:16:57.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:57.995 "listen_address": { 00:16:57.995 "trtype": "TCP", 00:16:57.995 "adrfam": "IPv4", 00:16:57.995 "traddr": "10.0.0.2", 00:16:57.995 "trsvcid": "4420" 00:16:57.995 }, 00:16:57.995 "peer_address": { 00:16:57.995 "trtype": "TCP", 00:16:57.995 "adrfam": "IPv4", 00:16:57.995 "traddr": "10.0.0.1", 00:16:57.995 "trsvcid": "57710" 00:16:57.995 }, 00:16:57.995 "auth": { 00:16:57.995 "state": "completed", 00:16:57.995 "digest": "sha384", 00:16:57.995 "dhgroup": "ffdhe3072" 00:16:57.995 } 00:16:57.995 } 00:16:57.995 ]' 00:16:57.995 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.995 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.995 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.255 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:58.255 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.255 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.255 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.255 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:58.255 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:58.255 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:16:58.825 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.825 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:58.825 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.825 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
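
Note that for keyid 3 both nvmf_subsystem_add_host and bdev_connect above carry only --dhchap-key key3 and no controller key: ckeys[3] is empty, so the ${ckeys[$3]:+...} expansion in connect_authenticate collapses to nothing and only unidirectional authentication is requested for that key. A self-contained sketch of that conditional-argument idiom, with illustrative array contents:

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
    ckeys=( "c0" "c1" "c2" "" )   # last slot empty, as for key3 in this run

    for i in 0 1 2 3; do
        # ${var:+word} expands to word only when var is set and non-empty,
        # so the whole flag pair disappears for the empty slot
        ckey=( ${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"} )
        echo rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
            --dhchap-key "key$i" "${ckey[@]}"
    done
    # the i=3 line prints no --dhchap-ctrlr-key argument
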
00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.084 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.342 00:16:59.342 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.342 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.342 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.601 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.601 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.601 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.601 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.601 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.601 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.601 { 00:16:59.601 "cntlid": 71, 00:16:59.601 "qid": 0, 00:16:59.601 "state": "enabled", 00:16:59.601 "thread": "nvmf_tgt_poll_group_000", 00:16:59.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:59.601 "listen_address": { 00:16:59.601 "trtype": "TCP", 00:16:59.601 "adrfam": "IPv4", 00:16:59.601 "traddr": "10.0.0.2", 00:16:59.601 "trsvcid": "4420" 00:16:59.601 }, 00:16:59.601 "peer_address": { 00:16:59.601 "trtype": "TCP", 00:16:59.601 "adrfam": "IPv4", 00:16:59.601 "traddr": "10.0.0.1", 00:16:59.601 "trsvcid": "57740" 00:16:59.601 }, 00:16:59.601 "auth": { 00:16:59.601 "state": "completed", 00:16:59.601 "digest": "sha384", 00:16:59.601 "dhgroup": "ffdhe3072" 00:16:59.601 } 00:16:59.601 } 00:16:59.601 ]' 00:16:59.601 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.601 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.601 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.601 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:59.860 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.860 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.860 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.860 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.860 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:16:59.860 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:00.428 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.428 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:00.428 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.428 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.428 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.428 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.428 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.428 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:00.428 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:00.687 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:00.687 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.687 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.687 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:00.687 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:00.687 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.687 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.687 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.687 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.687 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
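
Between bdev iterations the suite also proves the same key material through the kernel initiator, as the nvme_connect/nvme disconnect entries above show. A condensed sketch of that leg, with the full DHHC-1 secrets from the trace replaced by <...> placeholders (hostnqn/hostid as above; per the trace, -i 1 limits the I/O queue count and -l 0 sets the controller-loss timeout to zero, and --dhchap-ctrl-secret is only passed when a controller key exists for the keyid):

    # detach the host-side bdev controller once the qpair checks pass
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # re-authenticate via the kernel initiator with the same keys
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret 'DHHC-1:00:<host key>' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # drop the host entry so the next dhgroup/key pair starts clean
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
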
00:17:00.687 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.687 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.687 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.946 00:17:00.946 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.946 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.946 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.205 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.205 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.205 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.205 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.205 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.205 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.205 { 00:17:01.205 "cntlid": 73, 00:17:01.205 "qid": 0, 00:17:01.205 "state": "enabled", 00:17:01.205 "thread": "nvmf_tgt_poll_group_000", 00:17:01.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:01.205 "listen_address": { 00:17:01.205 "trtype": "TCP", 00:17:01.205 "adrfam": "IPv4", 00:17:01.205 "traddr": "10.0.0.2", 00:17:01.205 "trsvcid": "4420" 00:17:01.205 }, 00:17:01.205 "peer_address": { 00:17:01.205 "trtype": "TCP", 00:17:01.205 "adrfam": "IPv4", 00:17:01.205 "traddr": "10.0.0.1", 00:17:01.205 "trsvcid": "57764" 00:17:01.205 }, 00:17:01.205 "auth": { 00:17:01.205 "state": "completed", 00:17:01.205 "digest": "sha384", 00:17:01.205 "dhgroup": "ffdhe4096" 00:17:01.205 } 00:17:01.205 } 00:17:01.205 ]' 00:17:01.205 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.205 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.205 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.205 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:01.205 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.463 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.463 
18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.463 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.463 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:01.463 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:02.030 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.030 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:02.030 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.030 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.030 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.030 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.030 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.030 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.289 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:02.289 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.289 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.289 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:02.289 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:02.289 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.289 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.289 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.289 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.289 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.289 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.289 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.289 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.548 00:17:02.548 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.548 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.548 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.807 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.807 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.807 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.807 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.807 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.807 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.807 { 00:17:02.807 "cntlid": 75, 00:17:02.807 "qid": 0, 00:17:02.807 "state": "enabled", 00:17:02.807 "thread": "nvmf_tgt_poll_group_000", 00:17:02.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:02.807 "listen_address": { 00:17:02.807 "trtype": "TCP", 00:17:02.807 "adrfam": "IPv4", 00:17:02.807 "traddr": "10.0.0.2", 00:17:02.807 "trsvcid": "4420" 00:17:02.807 }, 00:17:02.807 "peer_address": { 00:17:02.807 "trtype": "TCP", 00:17:02.807 "adrfam": "IPv4", 00:17:02.807 "traddr": "10.0.0.1", 00:17:02.807 "trsvcid": "57802" 00:17:02.807 }, 00:17:02.807 "auth": { 00:17:02.807 "state": "completed", 00:17:02.807 "digest": "sha384", 00:17:02.807 "dhgroup": "ffdhe4096" 00:17:02.807 } 00:17:02.807 } 00:17:02.807 ]' 00:17:02.807 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.807 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.807 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.807 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:02.807 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.066 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.066 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.066 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.066 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:03.066 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:03.633 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.633 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:03.633 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.633 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.633 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.633 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.633 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:03.633 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:03.892 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:03.892 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.892 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.892 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:03.892 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.892 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.892 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.892 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.892 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.892 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.892 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.892 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.892 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.151 00:17:04.151 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.151 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.151 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.411 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.411 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.411 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.411 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.411 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.411 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.411 { 00:17:04.411 "cntlid": 77, 00:17:04.411 "qid": 0, 00:17:04.411 "state": "enabled", 00:17:04.411 "thread": "nvmf_tgt_poll_group_000", 00:17:04.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:04.411 "listen_address": { 00:17:04.411 "trtype": "TCP", 00:17:04.411 "adrfam": "IPv4", 00:17:04.411 "traddr": "10.0.0.2", 00:17:04.411 "trsvcid": "4420" 00:17:04.411 }, 00:17:04.411 "peer_address": { 00:17:04.411 "trtype": "TCP", 00:17:04.411 "adrfam": "IPv4", 00:17:04.411 "traddr": "10.0.0.1", 00:17:04.411 "trsvcid": "57820" 00:17:04.411 }, 00:17:04.411 "auth": { 00:17:04.411 "state": "completed", 00:17:04.411 "digest": "sha384", 00:17:04.411 "dhgroup": "ffdhe4096" 00:17:04.411 } 00:17:04.411 } 00:17:04.411 ]' 00:17:04.411 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.411 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.411 18:24:57 
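The attach itself runs from the SPDK host process rather than the kernel: bdev_nvme_attach_controller is pointed at named key objects (keyN/ckeyN) that were registered with the host earlier in the test, outside this excerpt. A condensed sketch of one attach/detach round for the key pair used here, with hostrpc reproducing the helper visible in the trace:

  hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

  # Attach with DH-HMAC-CHAP in both directions: key2 authenticates the host,
  # ckey2 makes the target authenticate itself back.
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # ... verify the qpair auth fields, then tear down before the next combination ...
  hostrpc bdev_nvme_detach_controller nvme0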
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.411 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:04.411 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.670 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.670 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.670 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.670 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:04.670 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:05.242 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.242 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:05.242 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.242 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.242 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.242 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.242 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:05.242 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:05.500 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:05.501 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.501 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.501 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:05.501 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:05.501 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.501 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:05.501 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.501 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.501 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.501 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:05.501 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.501 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.759 00:17:05.759 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.759 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.759 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.019 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.019 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.019 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.019 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.019 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.019 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.019 { 00:17:06.019 "cntlid": 79, 00:17:06.019 "qid": 0, 00:17:06.019 "state": "enabled", 00:17:06.019 "thread": "nvmf_tgt_poll_group_000", 00:17:06.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:06.019 "listen_address": { 00:17:06.019 "trtype": "TCP", 00:17:06.019 "adrfam": "IPv4", 00:17:06.019 "traddr": "10.0.0.2", 00:17:06.019 "trsvcid": "4420" 00:17:06.019 }, 00:17:06.019 "peer_address": { 00:17:06.019 "trtype": "TCP", 00:17:06.019 "adrfam": "IPv4", 00:17:06.019 "traddr": "10.0.0.1", 00:17:06.019 "trsvcid": "37502" 00:17:06.019 }, 00:17:06.019 "auth": { 00:17:06.019 "state": "completed", 00:17:06.019 "digest": "sha384", 00:17:06.019 "dhgroup": "ffdhe4096" 00:17:06.019 } 00:17:06.019 } 00:17:06.019 ]' 00:17:06.019 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.019 18:24:59 
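Note the parameter expansion captured in the trace, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}): the --dhchap-ctrlr-key argument is added only when a controller key exists for that index. key3 is the index with no controller key, so the nvmf_subsystem_add_host call above registers it with --dhchap-key alone, exercising unidirectional authentication (the host proves itself; the target does not). The same pattern in isolation, assuming the script's keys/ckeys arrays and rpc_cmd helper:

  keyid=3
  # Expands to nothing when ckeys[keyid] is empty or unset, so the option pair
  # is omitted entirely rather than passed with an empty value.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"

Consistently, the nvme connect call for this key further down passes only --dhchap-secret, with no --dhchap-ctrl-secret.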
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.019 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.019 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:06.019 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.278 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.278 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.278 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.278 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:06.278 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:06.846 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.846 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:06.846 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.846 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.846 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.846 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.846 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.846 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:06.847 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:07.105 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:07.105 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.105 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.105 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:07.105 18:25:00 
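Before each pass the host is narrowed to exactly one digest and one DH group, so a successful attach can only mean that specific combination was negotiated; there is nothing to fall back to. The reconfiguration that opens the ffdhe6144 round above is just:

  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

Restricting the allowed set to a single value is what turns each iteration into a targeted negotiation test rather than a "does anything work" check.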
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:07.105 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.105 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.105 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.105 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.105 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.105 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.105 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.106 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.365 00:17:07.365 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.365 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.365 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.623 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.623 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.623 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.623 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.623 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.623 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.623 { 00:17:07.623 "cntlid": 81, 00:17:07.623 "qid": 0, 00:17:07.623 "state": "enabled", 00:17:07.623 "thread": "nvmf_tgt_poll_group_000", 00:17:07.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:07.623 "listen_address": { 00:17:07.623 "trtype": "TCP", 00:17:07.623 "adrfam": "IPv4", 00:17:07.623 "traddr": "10.0.0.2", 00:17:07.623 "trsvcid": "4420" 00:17:07.623 }, 00:17:07.623 "peer_address": { 00:17:07.623 "trtype": "TCP", 00:17:07.623 "adrfam": "IPv4", 00:17:07.623 "traddr": "10.0.0.1", 00:17:07.623 "trsvcid": "37520" 00:17:07.623 }, 00:17:07.623 "auth": { 00:17:07.623 "state": "completed", 00:17:07.623 "digest": 
"sha384", 00:17:07.623 "dhgroup": "ffdhe6144" 00:17:07.623 } 00:17:07.623 } 00:17:07.623 ]' 00:17:07.623 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.623 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.623 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.882 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:07.882 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.882 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.882 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.882 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.141 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:08.141 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.709 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.709 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.709 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.709 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.709 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.277 00:17:09.277 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.277 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.277 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.277 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.277 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.277 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.277 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.277 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.277 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.277 { 00:17:09.277 "cntlid": 83, 00:17:09.277 "qid": 0, 00:17:09.277 "state": "enabled", 00:17:09.277 "thread": "nvmf_tgt_poll_group_000", 00:17:09.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:09.277 "listen_address": { 00:17:09.277 "trtype": "TCP", 00:17:09.277 "adrfam": "IPv4", 00:17:09.277 "traddr": "10.0.0.2", 00:17:09.277 
"trsvcid": "4420" 00:17:09.277 }, 00:17:09.277 "peer_address": { 00:17:09.277 "trtype": "TCP", 00:17:09.277 "adrfam": "IPv4", 00:17:09.277 "traddr": "10.0.0.1", 00:17:09.277 "trsvcid": "37532" 00:17:09.277 }, 00:17:09.277 "auth": { 00:17:09.277 "state": "completed", 00:17:09.277 "digest": "sha384", 00:17:09.277 "dhgroup": "ffdhe6144" 00:17:09.277 } 00:17:09.277 } 00:17:09.277 ]' 00:17:09.277 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.536 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.536 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.536 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.536 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.536 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.536 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.536 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.795 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:09.795 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.362 
18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.362 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.929 00:17:10.929 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.929 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.929 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.929 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.929 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.929 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.929 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.929 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.929 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.929 { 00:17:10.929 "cntlid": 85, 00:17:10.929 "qid": 0, 00:17:10.929 "state": "enabled", 00:17:10.929 "thread": "nvmf_tgt_poll_group_000", 00:17:10.929 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:10.929 "listen_address": { 00:17:10.929 "trtype": "TCP", 00:17:10.929 "adrfam": "IPv4", 00:17:10.929 "traddr": "10.0.0.2", 00:17:10.929 "trsvcid": "4420" 00:17:10.929 }, 00:17:10.929 "peer_address": { 00:17:10.929 "trtype": "TCP", 00:17:10.929 "adrfam": "IPv4", 00:17:10.929 "traddr": "10.0.0.1", 00:17:10.929 "trsvcid": "37556" 00:17:10.929 }, 00:17:10.929 "auth": { 00:17:10.929 "state": "completed", 00:17:10.929 "digest": "sha384", 00:17:10.929 "dhgroup": "ffdhe6144" 00:17:10.929 } 00:17:10.929 } 00:17:10.929 ]' 00:17:10.929 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.188 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.188 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.188 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.188 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.188 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.188 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.188 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.447 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:11.447 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:12.015 18:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:12.015 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.274 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.533 00:17:12.533 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.533 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.533 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.792 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.792 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.792 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.792 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.792 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.793 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.793 { 00:17:12.793 "cntlid": 87, 
00:17:12.793 "qid": 0, 00:17:12.793 "state": "enabled", 00:17:12.793 "thread": "nvmf_tgt_poll_group_000", 00:17:12.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:12.793 "listen_address": { 00:17:12.793 "trtype": "TCP", 00:17:12.793 "adrfam": "IPv4", 00:17:12.793 "traddr": "10.0.0.2", 00:17:12.793 "trsvcid": "4420" 00:17:12.793 }, 00:17:12.793 "peer_address": { 00:17:12.793 "trtype": "TCP", 00:17:12.793 "adrfam": "IPv4", 00:17:12.793 "traddr": "10.0.0.1", 00:17:12.793 "trsvcid": "37582" 00:17:12.793 }, 00:17:12.793 "auth": { 00:17:12.793 "state": "completed", 00:17:12.793 "digest": "sha384", 00:17:12.793 "dhgroup": "ffdhe6144" 00:17:12.793 } 00:17:12.793 } 00:17:12.793 ]' 00:17:12.793 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.793 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.793 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.793 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:12.793 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.793 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.793 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.793 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.051 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:13.051 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:13.619 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.619 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:13.619 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.619 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.619 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.619 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.619 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.619 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:13.619 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:13.878 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:13.878 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.878 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.878 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:13.878 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:13.878 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.878 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.878 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.878 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.878 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.878 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.878 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.878 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.446 00:17:14.446 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.446 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.446 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.446 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.446 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.446 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.446 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.446 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.446 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.446 { 00:17:14.446 "cntlid": 89, 00:17:14.446 "qid": 0, 00:17:14.446 "state": "enabled", 00:17:14.446 "thread": "nvmf_tgt_poll_group_000", 00:17:14.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:14.446 "listen_address": { 00:17:14.446 "trtype": "TCP", 00:17:14.446 "adrfam": "IPv4", 00:17:14.446 "traddr": "10.0.0.2", 00:17:14.446 "trsvcid": "4420" 00:17:14.446 }, 00:17:14.446 "peer_address": { 00:17:14.446 "trtype": "TCP", 00:17:14.446 "adrfam": "IPv4", 00:17:14.446 "traddr": "10.0.0.1", 00:17:14.446 "trsvcid": "37618" 00:17:14.446 }, 00:17:14.446 "auth": { 00:17:14.446 "state": "completed", 00:17:14.446 "digest": "sha384", 00:17:14.446 "dhgroup": "ffdhe8192" 00:17:14.446 } 00:17:14.446 } 00:17:14.446 ]' 00:17:14.446 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.446 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.446 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.705 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.705 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.705 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.705 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.705 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.705 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:14.705 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:15.271 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.271 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:15.271 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.271 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.271 18:25:08 
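Taken together, this stretch of the log is two nested loops, visible in the trace as target/auth.sh@119 (for dhgroup in "${dhgroups[@]}") and @120 (for keyid in "${!keys[@]}"): the outer loop walks the DH groups (ffdhe4096, ffdhe6144, and from here ffdhe8192), the inner one walks key indices 0 through 3. A schematic reconstruction, with connect_authenticate standing for the attach/verify/detach body traced above and sha384 fixed for this part of the run:

  for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do   # the groups seen in this excerpt
      for keyid in "${!keys[@]}"; do                 # 0..3, key3 without a controller key
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done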
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.529 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.096 00:17:16.096 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.096 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.096 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.355 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.355 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:16.355 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.355 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.355 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.355 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.355 { 00:17:16.355 "cntlid": 91, 00:17:16.355 "qid": 0, 00:17:16.355 "state": "enabled", 00:17:16.355 "thread": "nvmf_tgt_poll_group_000", 00:17:16.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:16.355 "listen_address": { 00:17:16.355 "trtype": "TCP", 00:17:16.355 "adrfam": "IPv4", 00:17:16.355 "traddr": "10.0.0.2", 00:17:16.355 "trsvcid": "4420" 00:17:16.355 }, 00:17:16.355 "peer_address": { 00:17:16.355 "trtype": "TCP", 00:17:16.355 "adrfam": "IPv4", 00:17:16.355 "traddr": "10.0.0.1", 00:17:16.355 "trsvcid": "45680" 00:17:16.355 }, 00:17:16.355 "auth": { 00:17:16.355 "state": "completed", 00:17:16.355 "digest": "sha384", 00:17:16.355 "dhgroup": "ffdhe8192" 00:17:16.355 } 00:17:16.355 } 00:17:16.355 ]' 00:17:16.355 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.355 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.355 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.355 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.355 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.355 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.355 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.355 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.613 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:16.613 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:17.179 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.179 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:17.179 18:25:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.179 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.179 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.179 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.179 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:17.179 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:17.439 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:17.439 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.439 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.439 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:17.439 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:17.439 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.439 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.439 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.439 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.439 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.439 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.439 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.439 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.007 00:17:18.007 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.007 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.007 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.007 18:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.007 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.007 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.007 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.007 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.007 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.007 { 00:17:18.007 "cntlid": 93, 00:17:18.007 "qid": 0, 00:17:18.007 "state": "enabled", 00:17:18.007 "thread": "nvmf_tgt_poll_group_000", 00:17:18.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:18.007 "listen_address": { 00:17:18.007 "trtype": "TCP", 00:17:18.007 "adrfam": "IPv4", 00:17:18.007 "traddr": "10.0.0.2", 00:17:18.007 "trsvcid": "4420" 00:17:18.007 }, 00:17:18.007 "peer_address": { 00:17:18.007 "trtype": "TCP", 00:17:18.007 "adrfam": "IPv4", 00:17:18.007 "traddr": "10.0.0.1", 00:17:18.007 "trsvcid": "45704" 00:17:18.007 }, 00:17:18.007 "auth": { 00:17:18.007 "state": "completed", 00:17:18.007 "digest": "sha384", 00:17:18.007 "dhgroup": "ffdhe8192" 00:17:18.007 } 00:17:18.007 } 00:17:18.007 ]' 00:17:18.007 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.266 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.266 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.266 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.266 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.266 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.266 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.266 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.525 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:18.525 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.093 18:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.093 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.661 00:17:19.661 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.661 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.661 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.920 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.920 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.920 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.920 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.920 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.920 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.920 { 00:17:19.920 "cntlid": 95, 00:17:19.920 "qid": 0, 00:17:19.920 "state": "enabled", 00:17:19.920 "thread": "nvmf_tgt_poll_group_000", 00:17:19.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:19.920 "listen_address": { 00:17:19.920 "trtype": "TCP", 00:17:19.920 "adrfam": "IPv4", 00:17:19.920 "traddr": "10.0.0.2", 00:17:19.920 "trsvcid": "4420" 00:17:19.920 }, 00:17:19.920 "peer_address": { 00:17:19.920 "trtype": "TCP", 00:17:19.920 "adrfam": "IPv4", 00:17:19.920 "traddr": "10.0.0.1", 00:17:19.920 "trsvcid": "45732" 00:17:19.920 }, 00:17:19.920 "auth": { 00:17:19.920 "state": "completed", 00:17:19.920 "digest": "sha384", 00:17:19.920 "dhgroup": "ffdhe8192" 00:17:19.920 } 00:17:19.920 } 00:17:19.920 ]' 00:17:19.920 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.920 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.920 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.920 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.920 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.920 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.920 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.920 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.179 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:20.179 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:20.746 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.746 18:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.746 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.746 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.746 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.746 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:20.746 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.746 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.746 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:20.746 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:21.006 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:21.006 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.006 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.006 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:21.006 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:21.006 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.006 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.006 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.006 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.006 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.006 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.006 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.006 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.265 00:17:21.265 
18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.265 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.265 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.524 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.524 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.524 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.524 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.524 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.524 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.524 { 00:17:21.524 "cntlid": 97, 00:17:21.524 "qid": 0, 00:17:21.524 "state": "enabled", 00:17:21.524 "thread": "nvmf_tgt_poll_group_000", 00:17:21.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:21.524 "listen_address": { 00:17:21.524 "trtype": "TCP", 00:17:21.524 "adrfam": "IPv4", 00:17:21.524 "traddr": "10.0.0.2", 00:17:21.524 "trsvcid": "4420" 00:17:21.524 }, 00:17:21.524 "peer_address": { 00:17:21.524 "trtype": "TCP", 00:17:21.524 "adrfam": "IPv4", 00:17:21.524 "traddr": "10.0.0.1", 00:17:21.524 "trsvcid": "45770" 00:17:21.524 }, 00:17:21.524 "auth": { 00:17:21.524 "state": "completed", 00:17:21.524 "digest": "sha512", 00:17:21.524 "dhgroup": "null" 00:17:21.524 } 00:17:21.524 } 00:17:21.524 ]' 00:17:21.524 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.524 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.524 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.524 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:21.524 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.524 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.524 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.524 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.783 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:21.783 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:22.351 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.351 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:22.351 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.351 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.351 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.351 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.351 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:22.351 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:22.610 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:22.610 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.610 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.610 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:22.610 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:22.610 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.610 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.610 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.610 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.610 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.610 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.610 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.610 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.869 00:17:22.869 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.869 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.869 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.128 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.128 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.128 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.128 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.128 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.128 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.128 { 00:17:23.128 "cntlid": 99, 00:17:23.128 "qid": 0, 00:17:23.128 "state": "enabled", 00:17:23.128 "thread": "nvmf_tgt_poll_group_000", 00:17:23.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:23.128 "listen_address": { 00:17:23.128 "trtype": "TCP", 00:17:23.128 "adrfam": "IPv4", 00:17:23.128 "traddr": "10.0.0.2", 00:17:23.128 "trsvcid": "4420" 00:17:23.128 }, 00:17:23.128 "peer_address": { 00:17:23.128 "trtype": "TCP", 00:17:23.128 "adrfam": "IPv4", 00:17:23.128 "traddr": "10.0.0.1", 00:17:23.128 "trsvcid": "45808" 00:17:23.128 }, 00:17:23.128 "auth": { 00:17:23.128 "state": "completed", 00:17:23.128 "digest": "sha512", 00:17:23.128 "dhgroup": "null" 00:17:23.128 } 00:17:23.128 } 00:17:23.128 ]' 00:17:23.128 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.128 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.128 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.128 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:23.128 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.128 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.128 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.128 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.387 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:23.387 18:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:23.954 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.954 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:23.954 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.954 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.954 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.954 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.954 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:23.954 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:24.214 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:24.214 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.214 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.214 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:24.214 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:24.214 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.214 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.214 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.214 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.214 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.214 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.214 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
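[editor's note] The entries above and below repeat one connect_authenticate cycle per digest/dhgroup/key-index combination: pin the host-side initiator to a single DH-HMAC-CHAP digest and DH group, allow the host NQN on the subsystem with the key pair under test, attach a controller through the host RPC socket, and confirm via nvmf_subsystem_get_qpairs that the new qpair negotiated exactly that combination before detaching. A minimal sketch of the cycle, assuming an SPDK checkout in the working directory and reusing the NQNs from this run (key$keyid/ckey$keyid name keyring entries the script registered earlier in the log):

    # host/target identifiers taken from the log above
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
    digest=sha512 dhgroup=null keyid=2
    hostrpc="scripts/rpc.py -s /var/tmp/host.sock"   # host-side SPDK instance

    # 1. Restrict the initiator to one digest/dhgroup pair.
    $hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # 2. Allow the host on the subsystem with the matching key pair
    #    (rpc_cmd in the script talks to the target's default RPC socket).
    scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # 3. Attach a controller, forcing DH-HMAC-CHAP on the new queue pair.
    $hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # 4. Verify what the target actually negotiated on qpair 0.
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

    # 5. Detach before the next combination.
    $hostrpc bdev_nvme_detach_controller nvme0

The outer loops visible in the xtrace (target/auth.sh@118-120: digests, then dhgroups, then key indices) simply rerun this cycle for every combination, which is why the section repeats with only the digest/dhgroup/key arguments changing.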
00:17:24.214 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.472 00:17:24.472 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.472 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.472 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.731 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.731 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.731 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.731 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.731 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.731 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.731 { 00:17:24.731 "cntlid": 101, 00:17:24.731 "qid": 0, 00:17:24.731 "state": "enabled", 00:17:24.731 "thread": "nvmf_tgt_poll_group_000", 00:17:24.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:24.731 "listen_address": { 00:17:24.731 "trtype": "TCP", 00:17:24.731 "adrfam": "IPv4", 00:17:24.731 "traddr": "10.0.0.2", 00:17:24.731 "trsvcid": "4420" 00:17:24.731 }, 00:17:24.731 "peer_address": { 00:17:24.731 "trtype": "TCP", 00:17:24.731 "adrfam": "IPv4", 00:17:24.731 "traddr": "10.0.0.1", 00:17:24.731 "trsvcid": "47978" 00:17:24.731 }, 00:17:24.731 "auth": { 00:17:24.731 "state": "completed", 00:17:24.731 "digest": "sha512", 00:17:24.731 "dhgroup": "null" 00:17:24.731 } 00:17:24.731 } 00:17:24.731 ]' 00:17:24.731 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.731 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.731 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.731 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:24.731 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.731 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.731 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.731 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.995 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:24.995 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.664 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.923 00:17:25.923 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.923 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.923 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.182 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.182 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.182 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.182 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.182 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.182 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.182 { 00:17:26.182 "cntlid": 103, 00:17:26.182 "qid": 0, 00:17:26.182 "state": "enabled", 00:17:26.182 "thread": "nvmf_tgt_poll_group_000", 00:17:26.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:26.182 "listen_address": { 00:17:26.182 "trtype": "TCP", 00:17:26.182 "adrfam": "IPv4", 00:17:26.182 "traddr": "10.0.0.2", 00:17:26.182 "trsvcid": "4420" 00:17:26.182 }, 00:17:26.182 "peer_address": { 00:17:26.182 "trtype": "TCP", 00:17:26.182 "adrfam": "IPv4", 00:17:26.182 "traddr": "10.0.0.1", 00:17:26.182 "trsvcid": "47994" 00:17:26.182 }, 00:17:26.182 "auth": { 00:17:26.182 "state": "completed", 00:17:26.182 "digest": "sha512", 00:17:26.182 "dhgroup": "null" 00:17:26.182 } 00:17:26.182 } 00:17:26.182 ]' 00:17:26.182 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.182 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.182 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.182 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:26.182 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.182 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.182 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.182 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.440 18:25:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:26.441 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:27.008 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.008 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:27.008 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.008 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.008 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.008 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.008 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.008 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.008 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.267 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:27.267 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.267 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.267 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:27.267 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:27.267 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.267 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.267 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.267 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.267 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.267 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
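[editor's note] After each RPC-level pass the script repeats the handshake with the kernel initiator: nvme-cli is handed the same secrets in the DHHC-1 interchange format ("DHHC-1:<hmac-id>:<base64 key>:", where the two-digit field selects the key transformation per the NVMe authentication spec — 00 for an unhashed key, 01/02/03 for SHA-256/384/512), then the controller is disconnected and the host entry removed so the next dhgroup (here advancing from null to ffdhe2048) is negotiated from scratch. A sketch of that check, with placeholder secrets rather than valid DHHC-1 keys and $subnqn/$hostnqn as defined in the earlier sketch:

    # addresses, NQNs and flags mirror the log entries above
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "$hostnqn" --hostid "${hostnqn#*uuid:}" -l 0 \
        --dhchap-secret "DHHC-1:00:<base64 host key>:" \
        --dhchap-ctrl-secret "DHHC-1:03:<base64 controller key>:"

    nvme disconnect -n "$subnqn"

    # drop the host entry so the next digest/dhgroup/key combination
    # starts from a clean subsystem state
    scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"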
00:17:27.267 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.267 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.526 00:17:27.526 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.526 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.526 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.785 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.785 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.785 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.785 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.785 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.785 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.785 { 00:17:27.785 "cntlid": 105, 00:17:27.785 "qid": 0, 00:17:27.785 "state": "enabled", 00:17:27.785 "thread": "nvmf_tgt_poll_group_000", 00:17:27.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:27.785 "listen_address": { 00:17:27.785 "trtype": "TCP", 00:17:27.785 "adrfam": "IPv4", 00:17:27.785 "traddr": "10.0.0.2", 00:17:27.785 "trsvcid": "4420" 00:17:27.785 }, 00:17:27.785 "peer_address": { 00:17:27.785 "trtype": "TCP", 00:17:27.785 "adrfam": "IPv4", 00:17:27.785 "traddr": "10.0.0.1", 00:17:27.785 "trsvcid": "48024" 00:17:27.785 }, 00:17:27.785 "auth": { 00:17:27.785 "state": "completed", 00:17:27.785 "digest": "sha512", 00:17:27.785 "dhgroup": "ffdhe2048" 00:17:27.785 } 00:17:27.785 } 00:17:27.785 ]' 00:17:27.785 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.785 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.785 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.785 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:27.785 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.785 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.785 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.785 18:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.044 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:28.044 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:28.610 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.610 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:28.610 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.611 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.611 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.611 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.611 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:28.611 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:28.869 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:28.869 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.869 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.869 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:28.869 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:28.869 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.869 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.869 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.869 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:28.869 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.869 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.870 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.870 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.128 00:17:29.128 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.128 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.128 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.387 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.387 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.387 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.387 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.387 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.387 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.387 { 00:17:29.387 "cntlid": 107, 00:17:29.387 "qid": 0, 00:17:29.387 "state": "enabled", 00:17:29.387 "thread": "nvmf_tgt_poll_group_000", 00:17:29.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:29.387 "listen_address": { 00:17:29.387 "trtype": "TCP", 00:17:29.387 "adrfam": "IPv4", 00:17:29.387 "traddr": "10.0.0.2", 00:17:29.387 "trsvcid": "4420" 00:17:29.387 }, 00:17:29.387 "peer_address": { 00:17:29.387 "trtype": "TCP", 00:17:29.388 "adrfam": "IPv4", 00:17:29.388 "traddr": "10.0.0.1", 00:17:29.388 "trsvcid": "48052" 00:17:29.388 }, 00:17:29.388 "auth": { 00:17:29.388 "state": "completed", 00:17:29.388 "digest": "sha512", 00:17:29.388 "dhgroup": "ffdhe2048" 00:17:29.388 } 00:17:29.388 } 00:17:29.388 ]' 00:17:29.388 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.388 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.388 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.388 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:29.388 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:29.388 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.388 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.388 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.646 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:29.646 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:30.215 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.215 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:30.215 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.215 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.215 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.215 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.215 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:30.215 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:30.474 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:30.474 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.474 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:30.474 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:30.474 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:30.474 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.474 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
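
Condensed for reference, the target-side provisioning that the trace repeats for every digest/dhgroup/key combination (outer loops at auth.sh@119-120) is, in sketch form: scripts/rpc.py abbreviates the full workspace path shown above; key2/ckey2 name DH-HMAC-CHAP keys loaded earlier in the script; the target-side rpc_cmd socket is assumed to be the default one, which the trace does not show.

    # Constrain the host bdev driver to one digest/dhgroup pair (hostrpc, auth.sh@121)
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Authorize the host NQN on the subsystem with a key pair (rpc_cmd, auth.sh@70)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Attach a controller, authenticating with the same pair (hostrpc, auth.sh@60)
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
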
00:17:30.474 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.474 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.474 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.474 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.474 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.474 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.733 00:17:30.733 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.733 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.733 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.993 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.993 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.993 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.993 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.993 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.993 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.993 { 00:17:30.993 "cntlid": 109, 00:17:30.993 "qid": 0, 00:17:30.993 "state": "enabled", 00:17:30.993 "thread": "nvmf_tgt_poll_group_000", 00:17:30.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:30.993 "listen_address": { 00:17:30.993 "trtype": "TCP", 00:17:30.993 "adrfam": "IPv4", 00:17:30.993 "traddr": "10.0.0.2", 00:17:30.993 "trsvcid": "4420" 00:17:30.993 }, 00:17:30.993 "peer_address": { 00:17:30.993 "trtype": "TCP", 00:17:30.993 "adrfam": "IPv4", 00:17:30.993 "traddr": "10.0.0.1", 00:17:30.993 "trsvcid": "48080" 00:17:30.993 }, 00:17:30.993 "auth": { 00:17:30.993 "state": "completed", 00:17:30.993 "digest": "sha512", 00:17:30.993 "dhgroup": "ffdhe2048" 00:17:30.993 } 00:17:30.993 } 00:17:30.993 ]' 00:17:30.993 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.993 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.993 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.993 18:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.993 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.993 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.993 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.993 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.252 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:31.252 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:31.820 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.820 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:31.820 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.820 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.820 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.820 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.820 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:31.820 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.079 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:32.079 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.079 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.079 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:32.079 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:32.079 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.079 18:25:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:32.079 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.079 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.079 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.079 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:32.079 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.079 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.339 00:17:32.339 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.339 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.339 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.598 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.598 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.598 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.598 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.598 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.598 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.598 { 00:17:32.598 "cntlid": 111, 00:17:32.598 "qid": 0, 00:17:32.598 "state": "enabled", 00:17:32.598 "thread": "nvmf_tgt_poll_group_000", 00:17:32.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:32.598 "listen_address": { 00:17:32.598 "trtype": "TCP", 00:17:32.598 "adrfam": "IPv4", 00:17:32.598 "traddr": "10.0.0.2", 00:17:32.598 "trsvcid": "4420" 00:17:32.598 }, 00:17:32.598 "peer_address": { 00:17:32.598 "trtype": "TCP", 00:17:32.598 "adrfam": "IPv4", 00:17:32.598 "traddr": "10.0.0.1", 00:17:32.598 "trsvcid": "48094" 00:17:32.598 }, 00:17:32.598 "auth": { 00:17:32.598 "state": "completed", 00:17:32.598 "digest": "sha512", 00:17:32.598 "dhgroup": "ffdhe2048" 00:17:32.598 } 00:17:32.598 } 00:17:32.598 ]' 00:17:32.598 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.598 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.598 
18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.598 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:32.598 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.598 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.598 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.598 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.857 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:32.857 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.479 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.738 00:17:33.738 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.738 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.738 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.997 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.997 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.997 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.997 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.997 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.997 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.997 { 00:17:33.997 "cntlid": 113, 00:17:33.997 "qid": 0, 00:17:33.997 "state": "enabled", 00:17:33.997 "thread": "nvmf_tgt_poll_group_000", 00:17:33.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:33.997 "listen_address": { 00:17:33.997 "trtype": "TCP", 00:17:33.997 "adrfam": "IPv4", 00:17:33.997 "traddr": "10.0.0.2", 00:17:33.997 "trsvcid": "4420" 00:17:33.997 }, 00:17:33.997 "peer_address": { 00:17:33.997 "trtype": "TCP", 00:17:33.997 "adrfam": "IPv4", 00:17:33.997 "traddr": "10.0.0.1", 00:17:33.997 "trsvcid": "48114" 00:17:33.997 }, 00:17:33.997 "auth": { 00:17:33.997 "state": "completed", 00:17:33.997 "digest": "sha512", 00:17:33.997 "dhgroup": "ffdhe3072" 00:17:33.997 } 00:17:33.997 } 00:17:33.997 ]' 00:17:33.997 18:25:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.997 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.997 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.255 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:34.255 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.255 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.255 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.255 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.514 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:34.514 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.081 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.339 00:17:35.339 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.339 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.339 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.598 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.598 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.598 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.598 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.598 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.598 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.598 { 00:17:35.598 "cntlid": 115, 00:17:35.598 "qid": 0, 00:17:35.598 "state": "enabled", 00:17:35.598 "thread": "nvmf_tgt_poll_group_000", 00:17:35.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:35.598 "listen_address": { 00:17:35.598 "trtype": "TCP", 00:17:35.598 "adrfam": "IPv4", 00:17:35.598 "traddr": "10.0.0.2", 00:17:35.598 "trsvcid": "4420" 00:17:35.598 }, 00:17:35.598 "peer_address": { 00:17:35.598 "trtype": "TCP", 00:17:35.598 "adrfam": "IPv4", 
00:17:35.598 "traddr": "10.0.0.1", 00:17:35.598 "trsvcid": "45724" 00:17:35.598 }, 00:17:35.598 "auth": { 00:17:35.598 "state": "completed", 00:17:35.598 "digest": "sha512", 00:17:35.598 "dhgroup": "ffdhe3072" 00:17:35.598 } 00:17:35.598 } 00:17:35.598 ]' 00:17:35.598 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.598 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.598 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.598 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.598 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.857 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.857 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.857 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.857 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:35.857 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:36.426 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.426 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:36.426 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.426 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.426 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.426 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.426 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:36.426 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:36.685 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:36.685 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.685 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.685 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:36.685 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:36.685 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.685 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.685 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.685 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.685 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.685 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.685 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.685 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.944 00:17:36.944 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.944 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.944 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.202 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.202 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.202 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.202 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.202 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.202 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.202 { 00:17:37.202 "cntlid": 117, 00:17:37.202 "qid": 0, 00:17:37.202 "state": "enabled", 00:17:37.202 "thread": "nvmf_tgt_poll_group_000", 00:17:37.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:37.202 "listen_address": { 00:17:37.202 "trtype": "TCP", 
00:17:37.202 "adrfam": "IPv4", 00:17:37.202 "traddr": "10.0.0.2", 00:17:37.202 "trsvcid": "4420" 00:17:37.202 }, 00:17:37.202 "peer_address": { 00:17:37.202 "trtype": "TCP", 00:17:37.202 "adrfam": "IPv4", 00:17:37.202 "traddr": "10.0.0.1", 00:17:37.202 "trsvcid": "45768" 00:17:37.202 }, 00:17:37.202 "auth": { 00:17:37.202 "state": "completed", 00:17:37.202 "digest": "sha512", 00:17:37.202 "dhgroup": "ffdhe3072" 00:17:37.202 } 00:17:37.202 } 00:17:37.202 ]' 00:17:37.202 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.202 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.202 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.202 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:37.202 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.461 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.461 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.461 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.461 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:37.461 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:38.028 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.028 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:38.028 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.028 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.028 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.028 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.028 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.028 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.288 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:38.288 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.288 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.288 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:38.288 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:38.288 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.288 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:38.288 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.288 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.288 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.288 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:38.288 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.288 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.547 00:17:38.547 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.547 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.547 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.805 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.805 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.805 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.805 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.805 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.805 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.805 { 00:17:38.805 "cntlid": 119, 00:17:38.805 "qid": 0, 00:17:38.805 "state": "enabled", 00:17:38.805 "thread": "nvmf_tgt_poll_group_000", 00:17:38.805 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:38.805 "listen_address": { 00:17:38.805 "trtype": "TCP", 00:17:38.805 "adrfam": "IPv4", 00:17:38.805 "traddr": "10.0.0.2", 00:17:38.805 "trsvcid": "4420" 00:17:38.805 }, 00:17:38.805 "peer_address": { 00:17:38.805 "trtype": "TCP", 00:17:38.805 "adrfam": "IPv4", 00:17:38.805 "traddr": "10.0.0.1", 00:17:38.805 "trsvcid": "45794" 00:17:38.805 }, 00:17:38.805 "auth": { 00:17:38.805 "state": "completed", 00:17:38.805 "digest": "sha512", 00:17:38.805 "dhgroup": "ffdhe3072" 00:17:38.805 } 00:17:38.805 } 00:17:38.805 ]' 00:17:38.805 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.805 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.805 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.805 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.805 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.805 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.805 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.805 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.063 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:39.063 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:39.630 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.630 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:39.630 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.630 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.630 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.630 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.630 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.630 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:39.630 18:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:39.889 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:39.889 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.889 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.889 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:39.889 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:39.889 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.889 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.889 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.889 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.889 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.889 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.889 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.889 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.148 00:17:40.148 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.148 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.148 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.407 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.407 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.407 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.407 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.407 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.407 18:25:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.407 { 00:17:40.407 "cntlid": 121, 00:17:40.407 "qid": 0, 00:17:40.407 "state": "enabled", 00:17:40.407 "thread": "nvmf_tgt_poll_group_000", 00:17:40.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:40.407 "listen_address": { 00:17:40.407 "trtype": "TCP", 00:17:40.407 "adrfam": "IPv4", 00:17:40.407 "traddr": "10.0.0.2", 00:17:40.407 "trsvcid": "4420" 00:17:40.407 }, 00:17:40.407 "peer_address": { 00:17:40.407 "trtype": "TCP", 00:17:40.407 "adrfam": "IPv4", 00:17:40.407 "traddr": "10.0.0.1", 00:17:40.407 "trsvcid": "45816" 00:17:40.407 }, 00:17:40.407 "auth": { 00:17:40.407 "state": "completed", 00:17:40.407 "digest": "sha512", 00:17:40.407 "dhgroup": "ffdhe4096" 00:17:40.407 } 00:17:40.407 } 00:17:40.407 ]' 00:17:40.407 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.407 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.407 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.407 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:40.407 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.666 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.666 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.666 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.666 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:40.666 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:41.233 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.233 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:41.233 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.233 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.233 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
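
The same key material is also exercised through the kernel host stack. In sketch form, the nvme-cli round trip at auth.sh@80-83: the DHHC-1 placeholders stand in for the full base64 secrets printed in the trace; -i 1 requests a single I/O queue and -l 0 zeroes the controller-loss timeout.

    # Connect via nvme-cli, passing host and controller secrets (auth.sh@36)
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:00:<host secret, base64>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller secret, base64>:'

    # Drop the connection and de-authorize the host before the next key (auth.sh@82-83)
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
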
00:17:41.233 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.233 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:41.233 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:41.491 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:41.491 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.491 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.491 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:41.491 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:41.491 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.491 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.491 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.491 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.491 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.491 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.491 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.491 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.750 00:17:41.750 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.750 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.750 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.008 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.008 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.008 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.008 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.008 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.008 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.008 { 00:17:42.008 "cntlid": 123, 00:17:42.008 "qid": 0, 00:17:42.008 "state": "enabled", 00:17:42.008 "thread": "nvmf_tgt_poll_group_000", 00:17:42.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:42.008 "listen_address": { 00:17:42.008 "trtype": "TCP", 00:17:42.008 "adrfam": "IPv4", 00:17:42.008 "traddr": "10.0.0.2", 00:17:42.008 "trsvcid": "4420" 00:17:42.008 }, 00:17:42.008 "peer_address": { 00:17:42.008 "trtype": "TCP", 00:17:42.008 "adrfam": "IPv4", 00:17:42.008 "traddr": "10.0.0.1", 00:17:42.008 "trsvcid": "45848" 00:17:42.008 }, 00:17:42.008 "auth": { 00:17:42.008 "state": "completed", 00:17:42.008 "digest": "sha512", 00:17:42.008 "dhgroup": "ffdhe4096" 00:17:42.008 } 00:17:42.008 } 00:17:42.008 ]' 00:17:42.008 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.008 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.008 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.008 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.008 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.267 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.267 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.267 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.267 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:42.267 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:42.834 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.834 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:42.834 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.834 18:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.834 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.834 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.834 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:42.834 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:43.094 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:43.094 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.094 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.094 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:43.094 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:43.094 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.094 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.094 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.094 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.094 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.094 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.094 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.094 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.353 00:17:43.353 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.353 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.353 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.612 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.612 18:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.612 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.612 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.612 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.612 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.612 { 00:17:43.612 "cntlid": 125, 00:17:43.612 "qid": 0, 00:17:43.612 "state": "enabled", 00:17:43.612 "thread": "nvmf_tgt_poll_group_000", 00:17:43.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:43.612 "listen_address": { 00:17:43.612 "trtype": "TCP", 00:17:43.612 "adrfam": "IPv4", 00:17:43.612 "traddr": "10.0.0.2", 00:17:43.612 "trsvcid": "4420" 00:17:43.612 }, 00:17:43.612 "peer_address": { 00:17:43.612 "trtype": "TCP", 00:17:43.612 "adrfam": "IPv4", 00:17:43.612 "traddr": "10.0.0.1", 00:17:43.612 "trsvcid": "45892" 00:17:43.612 }, 00:17:43.612 "auth": { 00:17:43.612 "state": "completed", 00:17:43.612 "digest": "sha512", 00:17:43.612 "dhgroup": "ffdhe4096" 00:17:43.612 } 00:17:43.612 } 00:17:43.612 ]' 00:17:43.612 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.612 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.612 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.612 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.612 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.871 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.871 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.871 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.871 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:43.871 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:44.438 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.438 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:44.438 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.438 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.438 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.438 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.438 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:44.438 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:44.697 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:44.697 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.697 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.697 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:44.697 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:44.697 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.697 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:44.697 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.697 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.697 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.697 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:44.697 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.697 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.956 00:17:44.956 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.956 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.956 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.218 18:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.218 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.218 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.218 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.218 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.218 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.218 { 00:17:45.218 "cntlid": 127, 00:17:45.218 "qid": 0, 00:17:45.218 "state": "enabled", 00:17:45.218 "thread": "nvmf_tgt_poll_group_000", 00:17:45.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:45.218 "listen_address": { 00:17:45.218 "trtype": "TCP", 00:17:45.218 "adrfam": "IPv4", 00:17:45.218 "traddr": "10.0.0.2", 00:17:45.218 "trsvcid": "4420" 00:17:45.218 }, 00:17:45.218 "peer_address": { 00:17:45.218 "trtype": "TCP", 00:17:45.218 "adrfam": "IPv4", 00:17:45.218 "traddr": "10.0.0.1", 00:17:45.218 "trsvcid": "34620" 00:17:45.218 }, 00:17:45.218 "auth": { 00:17:45.218 "state": "completed", 00:17:45.218 "digest": "sha512", 00:17:45.218 "dhgroup": "ffdhe4096" 00:17:45.218 } 00:17:45.218 } 00:17:45.218 ]' 00:17:45.218 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.218 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.218 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.218 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:45.218 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.218 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.218 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.218 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.478 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:45.478 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:46.046 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.046 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:46.046 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.046 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.046 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.046 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.046 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.046 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:46.046 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:46.305 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:46.305 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.305 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.305 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:46.305 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:46.305 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.305 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.305 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.305 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.305 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.305 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.305 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.305 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.565 00:17:46.565 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.565 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.565 
18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.823 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.823 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.823 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.823 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.823 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.824 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.824 { 00:17:46.824 "cntlid": 129, 00:17:46.824 "qid": 0, 00:17:46.824 "state": "enabled", 00:17:46.824 "thread": "nvmf_tgt_poll_group_000", 00:17:46.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:46.824 "listen_address": { 00:17:46.824 "trtype": "TCP", 00:17:46.824 "adrfam": "IPv4", 00:17:46.824 "traddr": "10.0.0.2", 00:17:46.824 "trsvcid": "4420" 00:17:46.824 }, 00:17:46.824 "peer_address": { 00:17:46.824 "trtype": "TCP", 00:17:46.824 "adrfam": "IPv4", 00:17:46.824 "traddr": "10.0.0.1", 00:17:46.824 "trsvcid": "34650" 00:17:46.824 }, 00:17:46.824 "auth": { 00:17:46.824 "state": "completed", 00:17:46.824 "digest": "sha512", 00:17:46.824 "dhgroup": "ffdhe6144" 00:17:46.824 } 00:17:46.824 } 00:17:46.824 ]' 00:17:46.824 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.824 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.824 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.824 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.824 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.083 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.083 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.083 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.083 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:47.083 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret 
DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:47.650 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.650 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:47.650 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.650 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.650 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.650 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.650 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:47.650 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:47.909 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:47.909 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.909 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.909 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:47.909 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:47.909 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.909 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.909 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.909 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.909 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.909 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.909 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.909 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.477 00:17:48.477 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.477 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.477 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.477 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.477 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.477 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.477 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.477 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.477 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.477 { 00:17:48.477 "cntlid": 131, 00:17:48.477 "qid": 0, 00:17:48.477 "state": "enabled", 00:17:48.477 "thread": "nvmf_tgt_poll_group_000", 00:17:48.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:48.477 "listen_address": { 00:17:48.477 "trtype": "TCP", 00:17:48.477 "adrfam": "IPv4", 00:17:48.477 "traddr": "10.0.0.2", 00:17:48.477 "trsvcid": "4420" 00:17:48.477 }, 00:17:48.477 "peer_address": { 00:17:48.477 "trtype": "TCP", 00:17:48.477 "adrfam": "IPv4", 00:17:48.477 "traddr": "10.0.0.1", 00:17:48.477 "trsvcid": "34678" 00:17:48.477 }, 00:17:48.477 "auth": { 00:17:48.477 "state": "completed", 00:17:48.477 "digest": "sha512", 00:17:48.477 "dhgroup": "ffdhe6144" 00:17:48.477 } 00:17:48.477 } 00:17:48.477 ]' 00:17:48.477 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.477 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.477 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.736 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:48.736 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.736 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.736 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.736 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.736 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:48.736 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:49.307 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.307 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:49.307 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.307 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.566 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.566 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.566 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.566 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.566 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:49.566 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.566 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.566 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:49.566 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:49.567 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.567 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.567 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.567 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.567 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.567 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.567 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.567 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.135 00:17:50.135 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.135 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.135 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.135 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.135 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.135 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.135 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.394 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.394 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.394 { 00:17:50.394 "cntlid": 133, 00:17:50.394 "qid": 0, 00:17:50.394 "state": "enabled", 00:17:50.394 "thread": "nvmf_tgt_poll_group_000", 00:17:50.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:50.394 "listen_address": { 00:17:50.394 "trtype": "TCP", 00:17:50.394 "adrfam": "IPv4", 00:17:50.394 "traddr": "10.0.0.2", 00:17:50.394 "trsvcid": "4420" 00:17:50.394 }, 00:17:50.394 "peer_address": { 00:17:50.394 "trtype": "TCP", 00:17:50.394 "adrfam": "IPv4", 00:17:50.394 "traddr": "10.0.0.1", 00:17:50.394 "trsvcid": "34690" 00:17:50.394 }, 00:17:50.394 "auth": { 00:17:50.394 "state": "completed", 00:17:50.394 "digest": "sha512", 00:17:50.394 "dhgroup": "ffdhe6144" 00:17:50.394 } 00:17:50.394 } 00:17:50.394 ]' 00:17:50.394 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.394 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.394 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.394 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:50.394 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.394 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.394 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.394 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.653 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret 
DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:50.653 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:51.222 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.222 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:51.222 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.222 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.222 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.222 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.222 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.222 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.482 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:51.482 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.482 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.482 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:51.482 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:51.482 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.482 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:51.482 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.482 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.482 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.482 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:51.482 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:17:51.482 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.741 00:17:51.741 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.741 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.741 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.000 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.000 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.000 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.000 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.000 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.000 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.000 { 00:17:52.000 "cntlid": 135, 00:17:52.000 "qid": 0, 00:17:52.000 "state": "enabled", 00:17:52.000 "thread": "nvmf_tgt_poll_group_000", 00:17:52.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:52.000 "listen_address": { 00:17:52.000 "trtype": "TCP", 00:17:52.000 "adrfam": "IPv4", 00:17:52.000 "traddr": "10.0.0.2", 00:17:52.000 "trsvcid": "4420" 00:17:52.000 }, 00:17:52.000 "peer_address": { 00:17:52.000 "trtype": "TCP", 00:17:52.000 "adrfam": "IPv4", 00:17:52.000 "traddr": "10.0.0.1", 00:17:52.000 "trsvcid": "34726" 00:17:52.000 }, 00:17:52.000 "auth": { 00:17:52.000 "state": "completed", 00:17:52.000 "digest": "sha512", 00:17:52.000 "dhgroup": "ffdhe6144" 00:17:52.000 } 00:17:52.000 } 00:17:52.000 ]' 00:17:52.000 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.000 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.000 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.000 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:52.000 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.000 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.000 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.000 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.259 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:52.259 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:52.826 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.827 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:52.827 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.827 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.827 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.827 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.827 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.827 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:52.827 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:53.086 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:53.086 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.086 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.086 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:53.086 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:53.086 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.086 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.086 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.086 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.086 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.086 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.086 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.086 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.653 00:17:53.653 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.653 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.653 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.653 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.653 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.653 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.653 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.653 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.653 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.653 { 00:17:53.653 "cntlid": 137, 00:17:53.653 "qid": 0, 00:17:53.653 "state": "enabled", 00:17:53.653 "thread": "nvmf_tgt_poll_group_000", 00:17:53.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:53.653 "listen_address": { 00:17:53.653 "trtype": "TCP", 00:17:53.653 "adrfam": "IPv4", 00:17:53.653 "traddr": "10.0.0.2", 00:17:53.653 "trsvcid": "4420" 00:17:53.653 }, 00:17:53.653 "peer_address": { 00:17:53.653 "trtype": "TCP", 00:17:53.653 "adrfam": "IPv4", 00:17:53.653 "traddr": "10.0.0.1", 00:17:53.653 "trsvcid": "34748" 00:17:53.653 }, 00:17:53.653 "auth": { 00:17:53.653 "state": "completed", 00:17:53.653 "digest": "sha512", 00:17:53.653 "dhgroup": "ffdhe8192" 00:17:53.653 } 00:17:53.653 } 00:17:53.653 ]' 00:17:53.653 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.912 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.912 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.912 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:53.912 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.912 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.912 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.912 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.171 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:54.171 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:17:54.738 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.738 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:54.738 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.738 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.738 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.738 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.738 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.738 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.738 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:54.738 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.738 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.738 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:54.738 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:54.738 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.738 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.738 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.738 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.997 18:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.997 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.997 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.997 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.255 00:17:55.255 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.255 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.255 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.513 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.513 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.513 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.513 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.513 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.513 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.513 { 00:17:55.513 "cntlid": 139, 00:17:55.513 "qid": 0, 00:17:55.513 "state": "enabled", 00:17:55.513 "thread": "nvmf_tgt_poll_group_000", 00:17:55.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:55.513 "listen_address": { 00:17:55.513 "trtype": "TCP", 00:17:55.513 "adrfam": "IPv4", 00:17:55.513 "traddr": "10.0.0.2", 00:17:55.513 "trsvcid": "4420" 00:17:55.513 }, 00:17:55.513 "peer_address": { 00:17:55.513 "trtype": "TCP", 00:17:55.513 "adrfam": "IPv4", 00:17:55.513 "traddr": "10.0.0.1", 00:17:55.513 "trsvcid": "34664" 00:17:55.513 }, 00:17:55.513 "auth": { 00:17:55.513 "state": "completed", 00:17:55.513 "digest": "sha512", 00:17:55.513 "dhgroup": "ffdhe8192" 00:17:55.513 } 00:17:55.513 } 00:17:55.513 ]' 00:17:55.513 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.513 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.513 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.772 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.772 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.772 18:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.772 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.772 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.772 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:55.772 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: --dhchap-ctrl-secret DHHC-1:02:OTBmZmUxMzNhOTk1Y2Q4Zjc0NzQxMTUxZmFlOTlmNzBkMzljMDk0NzAwMWVhMTNl4nuF1Q==: 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.710 18:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.710 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.279 00:17:57.279 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.279 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.279 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.279 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.279 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.279 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.279 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.537 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.538 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.538 { 00:17:57.538 "cntlid": 141, 00:17:57.538 "qid": 0, 00:17:57.538 "state": "enabled", 00:17:57.538 "thread": "nvmf_tgt_poll_group_000", 00:17:57.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:57.538 "listen_address": { 00:17:57.538 "trtype": "TCP", 00:17:57.538 "adrfam": "IPv4", 00:17:57.538 "traddr": "10.0.0.2", 00:17:57.538 "trsvcid": "4420" 00:17:57.538 }, 00:17:57.538 "peer_address": { 00:17:57.538 "trtype": "TCP", 00:17:57.538 "adrfam": "IPv4", 00:17:57.538 "traddr": "10.0.0.1", 00:17:57.538 "trsvcid": "34700" 00:17:57.538 }, 00:17:57.538 "auth": { 00:17:57.538 "state": "completed", 00:17:57.538 "digest": "sha512", 00:17:57.538 "dhgroup": "ffdhe8192" 00:17:57.538 } 00:17:57.538 } 00:17:57.538 ]' 00:17:57.538 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.538 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.538 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.538 18:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.538 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.538 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.538 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.538 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.796 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:57.796 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:01:YzRiMmZhODI0OGE3ZWUzYWE4ODQyNTM0ZjNjMzBiYjXiD4SM: 00:17:58.364 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.364 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:58.364 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.364 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.364 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.364 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.364 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:58.364 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:58.623 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:58.623 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.623 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.623 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:58.623 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:58.623 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.623 18:25:51 
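Alongside the SPDK-host path, each iteration also connects the kernel initiator through nvme-cli, passing the same secrets in the DHHC-1 representation (the second field, 00 through 03, names the transformation hash applied to the secret: none, SHA-256, SHA-384, SHA-512). A trimmed sketch of the nvme_connect call traced above, with queue count, hostid and the full secret values elided:

    # trimmed sketch of the kernel-initiator connect for the key2 pass
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
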
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:58.623 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.623 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.623 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.623 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:58.623 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.623 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.881 00:17:59.140 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.140 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.140 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.140 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.140 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.140 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.140 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.140 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.140 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.140 { 00:17:59.140 "cntlid": 143, 00:17:59.140 "qid": 0, 00:17:59.140 "state": "enabled", 00:17:59.140 "thread": "nvmf_tgt_poll_group_000", 00:17:59.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:59.140 "listen_address": { 00:17:59.140 "trtype": "TCP", 00:17:59.140 "adrfam": "IPv4", 00:17:59.140 "traddr": "10.0.0.2", 00:17:59.140 "trsvcid": "4420" 00:17:59.140 }, 00:17:59.140 "peer_address": { 00:17:59.140 "trtype": "TCP", 00:17:59.140 "adrfam": "IPv4", 00:17:59.140 "traddr": "10.0.0.1", 00:17:59.140 "trsvcid": "34732" 00:17:59.140 }, 00:17:59.140 "auth": { 00:17:59.140 "state": "completed", 00:17:59.140 "digest": "sha512", 00:17:59.140 "dhgroup": "ffdhe8192" 00:17:59.140 } 00:17:59.140 } 00:17:59.140 ]' 00:17:59.140 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.140 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.140 
18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.399 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:59.399 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.399 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.399 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.400 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.659 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:17:59.659 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.227 18:25:53 
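For the combined pass that follows, the IFS=,/printf pair above rebuilds the full comma-separated lists so the host accepts every digest and every DH group at once. The equivalent direct invocation, assuming the same host socket as earlier:

    # sketch: re-arm the host with all digests and dhgroups before the
    # combined sha512/ffdhe8192 key0 pass above
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
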
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.227 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.795 00:18:00.795 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.795 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.795 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.054 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.054 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.054 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.054 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.054 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.054 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.054 { 00:18:01.054 "cntlid": 145, 00:18:01.054 "qid": 0, 00:18:01.054 "state": "enabled", 00:18:01.054 "thread": "nvmf_tgt_poll_group_000", 00:18:01.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:01.054 "listen_address": { 00:18:01.054 "trtype": "TCP", 00:18:01.054 "adrfam": "IPv4", 00:18:01.054 "traddr": "10.0.0.2", 00:18:01.054 "trsvcid": "4420" 00:18:01.054 }, 00:18:01.054 "peer_address": { 00:18:01.054 
"trtype": "TCP", 00:18:01.054 "adrfam": "IPv4", 00:18:01.054 "traddr": "10.0.0.1", 00:18:01.054 "trsvcid": "34756" 00:18:01.054 }, 00:18:01.054 "auth": { 00:18:01.054 "state": "completed", 00:18:01.054 "digest": "sha512", 00:18:01.054 "dhgroup": "ffdhe8192" 00:18:01.054 } 00:18:01.054 } 00:18:01.054 ]' 00:18:01.054 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.054 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.054 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.054 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:01.054 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.054 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.054 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.054 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.313 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:18:01.313 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjNlYTI1MTRlYWMwMGEyNDhjOGVlOTU0NDA2ZTM3OGIzMTdhZmI3NjRjNzY3ZGRh2EUr6g==: --dhchap-ctrl-secret DHHC-1:03:ZDFhYjg5MGYzNTJlZTAwZmVhN2JiNjQ5YTQxMTQ3Zjk5ODUzMmNiOTlkYTczZDJkYzgwMGU5MjY2ODZmMGNlMeR4mMg=: 00:18:01.880 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.880 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:01.880 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.880 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.880 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.880 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:01.880 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.881 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.881 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.881 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:01.881 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:01.881 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:01.881 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:01.881 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.881 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:01.881 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.881 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:01.881 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:01.881 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:02.448 request: 00:18:02.448 { 00:18:02.448 "name": "nvme0", 00:18:02.448 "trtype": "tcp", 00:18:02.448 "traddr": "10.0.0.2", 00:18:02.448 "adrfam": "ipv4", 00:18:02.448 "trsvcid": "4420", 00:18:02.448 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:02.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:02.448 "prchk_reftag": false, 00:18:02.448 "prchk_guard": false, 00:18:02.448 "hdgst": false, 00:18:02.448 "ddgst": false, 00:18:02.448 "dhchap_key": "key2", 00:18:02.448 "allow_unrecognized_csi": false, 00:18:02.448 "method": "bdev_nvme_attach_controller", 00:18:02.448 "req_id": 1 00:18:02.448 } 00:18:02.448 Got JSON-RPC error response 00:18:02.448 response: 00:18:02.448 { 00:18:02.448 "code": -5, 00:18:02.448 "message": "Input/output error" 00:18:02.448 } 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.448 18:25:55 
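The request above is the first deliberate failure: the host was registered with key1 only, so attaching with key2 is refused and rpc.py surfaces JSON-RPC code -5, Input/output error. The NOT wrapper from autotest_common.sh (its es= bookkeeping is visible in the trace) inverts that outcome, so the test step passes exactly when the command fails. A simplified, hypothetical rendering of that behaviour:

    # simplified, hypothetical rendering of the NOT wrapper's effect;
    # the real helper also validates its argument and classifies exit codes
    NOT() {
        if "$@"; then
            return 1   # the wrapped command unexpectedly succeeded
        fi
        return 0       # the expected failure occurred
    }
    NOT bdev_connect -b nvme0 --dhchap-key key2   # passes: attach must fail
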
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:02.448 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:02.708 request: 00:18:02.708 { 00:18:02.708 "name": "nvme0", 00:18:02.708 "trtype": "tcp", 00:18:02.708 "traddr": "10.0.0.2", 00:18:02.708 "adrfam": "ipv4", 00:18:02.708 "trsvcid": "4420", 00:18:02.708 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:02.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:02.708 "prchk_reftag": false, 00:18:02.708 "prchk_guard": false, 00:18:02.708 "hdgst": false, 00:18:02.708 "ddgst": false, 00:18:02.708 "dhchap_key": "key1", 00:18:02.708 "dhchap_ctrlr_key": "ckey2", 00:18:02.708 "allow_unrecognized_csi": false, 00:18:02.708 "method": "bdev_nvme_attach_controller", 00:18:02.708 "req_id": 1 00:18:02.708 } 00:18:02.708 Got JSON-RPC error response 00:18:02.708 response: 00:18:02.708 { 00:18:02.708 "code": -5, 00:18:02.708 "message": "Input/output error" 00:18:02.708 } 00:18:02.708 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:02.708 18:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:02.708 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:02.708 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:02.708 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:02.708 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.708 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.708 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.708 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:02.708 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.708 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.967 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.967 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.967 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:02.967 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.967 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:02.967 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.967 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:02.967 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.967 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.967 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.967 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.226 request: 00:18:03.226 { 00:18:03.226 "name": "nvme0", 00:18:03.226 "trtype": "tcp", 00:18:03.226 "traddr": "10.0.0.2", 00:18:03.226 "adrfam": "ipv4", 00:18:03.226 "trsvcid": "4420", 00:18:03.226 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:03.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:03.226 "prchk_reftag": false, 00:18:03.226 "prchk_guard": false, 00:18:03.226 "hdgst": false, 00:18:03.226 "ddgst": false, 00:18:03.226 "dhchap_key": "key1", 00:18:03.226 "dhchap_ctrlr_key": "ckey1", 00:18:03.226 "allow_unrecognized_csi": false, 00:18:03.226 "method": "bdev_nvme_attach_controller", 00:18:03.226 "req_id": 1 00:18:03.226 } 00:18:03.226 Got JSON-RPC error response 00:18:03.226 response: 00:18:03.226 { 00:18:03.226 "code": -5, 00:18:03.226 "message": "Input/output error" 00:18:03.226 } 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 402370 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 402370 ']' 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 402370 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 402370 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 402370' 00:18:03.226 killing process with pid 402370 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 402370 00:18:03.226 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 402370 00:18:03.485 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:03.485 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:03.485 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:03.485 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:18:03.485 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=424532 00:18:03.485 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:03.485 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 424532 00:18:03.485 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 424532 ']' 00:18:03.485 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.485 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:03.485 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.485 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:03.485 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.426 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:04.426 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:04.426 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:04.426 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:04.426 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.426 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.426 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:04.426 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 424532 00:18:04.426 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 424532 ']' 00:18:04.426 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.426 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:04.426 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:04.426 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:04.426 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.684 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.685 null0 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vzL 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.P8g ]] 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.P8g 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.6OT 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.685 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.685 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.685 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Lic ]] 00:18:04.685 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lic 00:18:04.685 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:04.944 18:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.VYx 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.KbS ]] 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KbS 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lGn 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
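After the restart with --wait-for-rpc, every secret file generated earlier in the run is registered with the target's keyring before any host entry references it; key3 has no controller key, which is why the [[ -n '' ]] branch above is skipped. The same loading step written out directly, using the file names from the trace and the target's default rpc socket:

    # condensed sketch of the key-loading loop traced above
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC keyring_file_add_key key0  /tmp/spdk.key-null.vzL
    $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.P8g
    $RPC keyring_file_add_key key1  /tmp/spdk.key-sha256.6OT
    $RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lic
    $RPC keyring_file_add_key key2  /tmp/spdk.key-sha384.VYx
    $RPC keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KbS
    $RPC keyring_file_add_key key3  /tmp/spdk.key-sha512.lGn
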
00:18:04.944 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.512 nvme0n1 00:18:05.512 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.512 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.512 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.770 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.770 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.770 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.770 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.770 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.770 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.770 { 00:18:05.770 "cntlid": 1, 00:18:05.770 "qid": 0, 00:18:05.770 "state": "enabled", 00:18:05.770 "thread": "nvmf_tgt_poll_group_000", 00:18:05.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:05.770 "listen_address": { 00:18:05.770 "trtype": "TCP", 00:18:05.770 "adrfam": "IPv4", 00:18:05.770 "traddr": "10.0.0.2", 00:18:05.770 "trsvcid": "4420" 00:18:05.770 }, 00:18:05.770 "peer_address": { 00:18:05.770 "trtype": "TCP", 00:18:05.770 "adrfam": "IPv4", 00:18:05.770 "traddr": "10.0.0.1", 00:18:05.770 "trsvcid": "33812" 00:18:05.770 }, 00:18:05.770 "auth": { 00:18:05.770 "state": "completed", 00:18:05.770 "digest": "sha512", 00:18:05.770 "dhgroup": "ffdhe8192" 00:18:05.770 } 00:18:05.770 } 00:18:05.770 ]' 00:18:05.770 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.770 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.770 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.029 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:06.029 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.029 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.029 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.029 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.029 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:18:06.029 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:18:06.597 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.856 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:06.856 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.856 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.856 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.856 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:06.856 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.856 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.856 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.856 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:06.856 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:06.856 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:06.856 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:06.856 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:06.856 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:06.856 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.856 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:06.856 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.856 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:06.856 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.856 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.178 request: 00:18:07.178 { 00:18:07.178 "name": "nvme0", 00:18:07.178 "trtype": "tcp", 00:18:07.178 "traddr": "10.0.0.2", 00:18:07.178 "adrfam": "ipv4", 00:18:07.178 "trsvcid": "4420", 00:18:07.178 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:07.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:07.178 "prchk_reftag": false, 00:18:07.178 "prchk_guard": false, 00:18:07.178 "hdgst": false, 00:18:07.178 "ddgst": false, 00:18:07.178 "dhchap_key": "key3", 00:18:07.178 "allow_unrecognized_csi": false, 00:18:07.178 "method": "bdev_nvme_attach_controller", 00:18:07.178 "req_id": 1 00:18:07.178 } 00:18:07.178 Got JSON-RPC error response 00:18:07.178 response: 00:18:07.178 { 00:18:07.178 "code": -5, 00:18:07.178 "message": "Input/output error" 00:18:07.178 } 00:18:07.178 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:07.178 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:07.178 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:07.178 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:07.178 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:07.178 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:07.178 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:07.178 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.512 request: 00:18:07.512 { 00:18:07.512 "name": "nvme0", 00:18:07.512 "trtype": "tcp", 00:18:07.512 "traddr": "10.0.0.2", 00:18:07.512 "adrfam": "ipv4", 00:18:07.512 "trsvcid": "4420", 00:18:07.512 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:07.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:07.512 "prchk_reftag": false, 00:18:07.512 "prchk_guard": false, 00:18:07.512 "hdgst": false, 00:18:07.512 "ddgst": false, 00:18:07.512 "dhchap_key": "key3", 00:18:07.512 "allow_unrecognized_csi": false, 00:18:07.512 "method": "bdev_nvme_attach_controller", 00:18:07.512 "req_id": 1 00:18:07.512 } 00:18:07.512 Got JSON-RPC error response 00:18:07.512 response: 00:18:07.512 { 00:18:07.512 "code": -5, 00:18:07.512 "message": "Input/output error" 00:18:07.512 } 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:07.512 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:07.813 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:08.072 request: 00:18:08.072 { 00:18:08.072 "name": "nvme0", 00:18:08.072 "trtype": "tcp", 00:18:08.072 "traddr": "10.0.0.2", 00:18:08.072 "adrfam": "ipv4", 00:18:08.072 "trsvcid": "4420", 00:18:08.072 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:08.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:08.072 "prchk_reftag": false, 00:18:08.072 "prchk_guard": false, 00:18:08.072 "hdgst": false, 00:18:08.072 "ddgst": false, 00:18:08.072 "dhchap_key": "key0", 00:18:08.072 "dhchap_ctrlr_key": "key1", 00:18:08.072 "allow_unrecognized_csi": false, 00:18:08.072 "method": "bdev_nvme_attach_controller", 00:18:08.072 "req_id": 1 00:18:08.072 } 00:18:08.072 Got JSON-RPC error response 00:18:08.072 response: 00:18:08.072 { 00:18:08.072 "code": -5, 00:18:08.072 "message": "Input/output error" 00:18:08.072 } 00:18:08.072 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:08.072 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:08.072 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:08.072 18:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:08.072 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:08.072 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:08.072 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:08.331 nvme0n1 00:18:08.331 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:08.331 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:08.331 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.588 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.589 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.589 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.846 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:08.846 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.846 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.846 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.846 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:08.847 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:08.847 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:09.413 nvme0n1 00:18:09.671 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:09.672 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:09.672 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.672 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.672 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:09.672 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.672 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.672 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.672 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:09.672 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:09.672 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.101 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.101 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:18:10.101 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: --dhchap-ctrl-secret DHHC-1:03:ZGI0N2VjYjdjYjBkZGZlZGQ4M2FkYWMwOGZmMGJjY2Y0NzY4MWQ0MWJmZjAzNzVlNzM5ZmU1ZWU5OWYzZjNlOSRta8Y=: 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:10.668 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:11.235 request: 00:18:11.235 { 00:18:11.235 "name": "nvme0", 00:18:11.235 "trtype": "tcp", 00:18:11.235 "traddr": "10.0.0.2", 00:18:11.235 "adrfam": "ipv4", 00:18:11.235 "trsvcid": "4420", 00:18:11.235 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:11.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:11.235 "prchk_reftag": false, 00:18:11.235 "prchk_guard": false, 00:18:11.235 "hdgst": false, 00:18:11.235 "ddgst": false, 00:18:11.235 "dhchap_key": "key1", 00:18:11.235 "allow_unrecognized_csi": false, 00:18:11.235 "method": "bdev_nvme_attach_controller", 00:18:11.235 "req_id": 1 00:18:11.235 } 00:18:11.235 Got JSON-RPC error response 00:18:11.235 response: 00:18:11.235 { 00:18:11.235 "code": -5, 00:18:11.235 "message": "Input/output error" 00:18:11.235 } 00:18:11.235 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:11.235 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:11.235 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:11.235 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:11.235 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:11.235 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:11.235 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:11.801 nvme0n1 00:18:11.801 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:11.801 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:11.801 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.059 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.059 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.059 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.316 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:12.316 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.316 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.316 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.316 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:12.316 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:12.316 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:12.575 nvme0n1 00:18:12.575 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:12.575 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:12.575 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.833 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.833 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.833 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: '' 2s 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: ]] 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:N2Q1YmRhOGM4NDdjOGI1MjZlMjIyZmI2ZDA4NmZkZmRO+RTG: 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:13.092 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: 2s 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:14.995 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: ]] 00:18:14.996 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDYwNmY2ZGM2Zjg5MjUxNThiNjExOGYwYTliY2FjYTk4NGRhNjg5NmMzMWViMTRiYDzURw==: 00:18:14.996 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:14.996 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:17.532 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:17.532 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:17.532 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:17.532 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:17.532 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:17.532 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:17.532 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:17.532 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.532 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:17.532 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.532 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.532 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.532 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:17.532 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:17.532 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:17.791 nvme0n1 00:18:17.791 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:17.791 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.791 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.791 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.791 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:17.791 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:18.361 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:18.361 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:18.361 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.620 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.620 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:18.620 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.620 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.620 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.620 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:18.620 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:18.620 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:18.620 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:18.620 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.879 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.879 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:18.879 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.879 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.879 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.879 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:18.879 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:18.879 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:18.879 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:18.879 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.880 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:18.880 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.880 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:18.880 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:19.448 request: 00:18:19.449 { 00:18:19.449 "name": "nvme0", 00:18:19.449 "dhchap_key": "key1", 00:18:19.449 "dhchap_ctrlr_key": "key3", 00:18:19.449 "method": "bdev_nvme_set_keys", 00:18:19.449 "req_id": 1 00:18:19.449 } 00:18:19.449 Got JSON-RPC error response 00:18:19.449 response: 00:18:19.449 { 00:18:19.449 "code": -13, 00:18:19.449 "message": "Permission denied" 00:18:19.449 } 00:18:19.449 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:19.449 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.449 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.449 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.449 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:19.449 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:19.449 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.449 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:19.449 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:20.833 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:20.833 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:20.833 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.833 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:20.833 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:20.833 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.833 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.834 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.834 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:20.834 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:20.834 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:21.402 nvme0n1 00:18:21.402 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:21.402 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.402 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.402 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.402 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:21.402 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:21.402 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:21.402 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
00:18:21.402 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.402 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:21.402 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.402 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:21.402 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:21.971 request: 00:18:21.971 { 00:18:21.971 "name": "nvme0", 00:18:21.971 "dhchap_key": "key2", 00:18:21.971 "dhchap_ctrlr_key": "key0", 00:18:21.971 "method": "bdev_nvme_set_keys", 00:18:21.971 "req_id": 1 00:18:21.971 } 00:18:21.971 Got JSON-RPC error response 00:18:21.971 response: 00:18:21.971 { 00:18:21.971 "code": -13, 00:18:21.971 "message": "Permission denied" 00:18:21.971 } 00:18:21.971 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:21.971 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.971 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.971 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.971 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:21.971 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:21.971 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.230 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:22.230 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:23.169 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:23.169 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:23.169 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.428 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:23.428 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:23.428 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:23.428 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 402613 00:18:23.428 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 402613 ']' 00:18:23.428 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 402613 00:18:23.428 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:23.428 18:26:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:23.428 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 402613 00:18:23.428 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:23.428 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:23.428 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 402613' 00:18:23.428 killing process with pid 402613 00:18:23.428 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 402613 00:18:23.428 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 402613 00:18:23.687 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:23.687 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:23.687 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:23.687 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:23.687 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:23.687 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:23.687 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:23.687 rmmod nvme_tcp 00:18:23.687 rmmod nvme_fabrics 00:18:23.946 rmmod nvme_keyring 00:18:23.946 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:23.946 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:23.946 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:23.946 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 424532 ']' 00:18:23.946 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 424532 00:18:23.946 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 424532 ']' 00:18:23.946 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 424532 00:18:23.946 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:23.946 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:23.946 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 424532 00:18:23.946 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:23.946 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:23.946 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 424532' 00:18:23.946 killing process with pid 424532 00:18:23.946 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 424532 00:18:23.946 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@974 -- # wait 424532 00:18:24.206 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:24.206 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:24.206 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:24.206 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:24.206 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:18:24.206 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:24.206 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:18:24.206 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:24.206 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:24.206 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.206 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.206 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.120 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:26.120 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.vzL /tmp/spdk.key-sha256.6OT /tmp/spdk.key-sha384.VYx /tmp/spdk.key-sha512.lGn /tmp/spdk.key-sha512.P8g /tmp/spdk.key-sha384.Lic /tmp/spdk.key-sha256.KbS '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:26.120 00:18:26.120 real 2m33.006s 00:18:26.120 user 5m52.052s 00:18:26.120 sys 0m24.279s 00:18:26.120 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:26.120 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.120 ************************************ 00:18:26.120 END TEST nvmf_auth_target 00:18:26.120 ************************************ 00:18:26.120 18:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:26.120 18:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:26.120 18:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:26.120 18:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:26.120 18:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:26.120 ************************************ 00:18:26.120 START TEST nvmf_bdevio_no_huge 00:18:26.120 ************************************ 00:18:26.120 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:26.381 * Looking for test storage... 
00:18:26.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:26.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.381 --rc genhtml_branch_coverage=1 00:18:26.381 --rc genhtml_function_coverage=1 00:18:26.381 --rc genhtml_legend=1 00:18:26.381 --rc geninfo_all_blocks=1 00:18:26.381 --rc geninfo_unexecuted_blocks=1 00:18:26.381 00:18:26.381 ' 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:26.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.381 --rc genhtml_branch_coverage=1 00:18:26.381 --rc genhtml_function_coverage=1 00:18:26.381 --rc genhtml_legend=1 00:18:26.381 --rc geninfo_all_blocks=1 00:18:26.381 --rc geninfo_unexecuted_blocks=1 00:18:26.381 00:18:26.381 ' 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:26.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.381 --rc genhtml_branch_coverage=1 00:18:26.381 --rc genhtml_function_coverage=1 00:18:26.381 --rc genhtml_legend=1 00:18:26.381 --rc geninfo_all_blocks=1 00:18:26.381 --rc geninfo_unexecuted_blocks=1 00:18:26.381 00:18:26.381 ' 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:26.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.381 --rc genhtml_branch_coverage=1 00:18:26.381 --rc genhtml_function_coverage=1 00:18:26.381 --rc genhtml_legend=1 00:18:26.381 --rc geninfo_all_blocks=1 00:18:26.381 --rc geninfo_unexecuted_blocks=1 00:18:26.381 00:18:26.381 ' 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:26.381 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:26.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:26.382 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:32.953 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.953 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:32.954 
18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:32.954 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:32.954 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:32.954 Found net devices under 0000:86:00.0: cvl_0_0 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:32.954 Found net devices under 0000:86:00.1: cvl_0_1 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:32.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:18:32.954 00:18:32.954 --- 10.0.0.2 ping statistics --- 00:18:32.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.954 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:32.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:18:32.954 00:18:32.954 --- 10.0.0.1 ping statistics --- 00:18:32.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.954 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.954 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=431490 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 431490 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 431490 ']' 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:32.955 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:32.955 [2024-10-08 18:26:25.705983] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:18:32.955 [2024-10-08 18:26:25.706035] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:32.955 [2024-10-08 18:26:25.784309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.955 [2024-10-08 18:26:25.868852] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.955 [2024-10-08 18:26:25.868888] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.955 [2024-10-08 18:26:25.868894] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.955 [2024-10-08 18:26:25.868901] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.955 [2024-10-08 18:26:25.868906] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
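The namespace plumbing traced earlier builds a two-port loopback rig out of a single NIC: port 0000:86:00.0 is moved into the cvl_0_0_ns_spdk namespace as the target side (cvl_0_0, 10.0.0.2), while its sibling stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1). Condensed to its essentials, with interface names and addresses as in the trace and error handling omitted:

# Target side: isolate one port in its own network namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# Initiator side: the sibling port stays in the root namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
# Open the NVMe/TCP port and prove reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Putting the target port in its own namespace is what forces the pings (and later the NVMe/TCP traffic) onto the wire; with both ports addressed in one namespace the kernel would short-circuit the traffic locally and the e810 hardware would never be exercised.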
00:18:32.955 [2024-10-08 18:26:25.870183] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:18:32.955 [2024-10-08 18:26:25.870291] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:18:32.955 [2024-10-08 18:26:25.870412] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.955 [2024-10-08 18:26:25.870413] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:18:33.214 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:33.214 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:33.474 [2024-10-08 18:26:26.579482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:33.474 Malloc0 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:33.474 [2024-10-08 18:26:26.623741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:33.474 { 00:18:33.474 "params": { 00:18:33.474 "name": "Nvme$subsystem", 00:18:33.474 "trtype": "$TEST_TRANSPORT", 00:18:33.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.474 "adrfam": "ipv4", 00:18:33.474 "trsvcid": "$NVMF_PORT", 00:18:33.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.474 "hdgst": ${hdgst:-false}, 00:18:33.474 "ddgst": ${ddgst:-false} 00:18:33.474 }, 00:18:33.474 "method": "bdev_nvme_attach_controller" 00:18:33.474 } 00:18:33.474 EOF 00:18:33.474 )") 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:18:33.474 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:33.474 "params": { 00:18:33.474 "name": "Nvme1", 00:18:33.474 "trtype": "tcp", 00:18:33.474 "traddr": "10.0.0.2", 00:18:33.474 "adrfam": "ipv4", 00:18:33.474 "trsvcid": "4420", 00:18:33.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.474 "hdgst": false, 00:18:33.474 "ddgst": false 00:18:33.474 }, 00:18:33.474 "method": "bdev_nvme_attach_controller" 00:18:33.474 }' 00:18:33.474 [2024-10-08 18:26:26.674869] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
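The rpc_cmd provisioning sequence above is, in effect, a series of scripts/rpc.py calls (rpc_cmd is the test framework's wrapper around that script). Spelled out directly:

# Same provisioning as the rpc_cmd trace, as plain rpc.py invocations:
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB I/O unit size
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420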
00:18:33.474 [2024-10-08 18:26:26.674914] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid431736 ] 00:18:33.474 [2024-10-08 18:26:26.744520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:33.733 [2024-10-08 18:26:26.830478] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.733 [2024-10-08 18:26:26.830583] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.733 [2024-10-08 18:26:26.830583] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.992 I/O targets: 00:18:33.992 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:33.992 00:18:33.992 00:18:33.992 CUnit - A unit testing framework for C - Version 2.1-3 00:18:33.992 http://cunit.sourceforge.net/ 00:18:33.992 00:18:33.992 00:18:33.992 Suite: bdevio tests on: Nvme1n1 00:18:33.992 Test: blockdev write read block ...passed 00:18:33.992 Test: blockdev write zeroes read block ...passed 00:18:33.992 Test: blockdev write zeroes read no split ...passed 00:18:33.992 Test: blockdev write zeroes read split ...passed 00:18:34.251 Test: blockdev write zeroes read split partial ...passed 00:18:34.251 Test: blockdev reset ...[2024-10-08 18:26:27.319738] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.251 [2024-10-08 18:26:27.319800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2427a20 (9): Bad file descriptor 00:18:34.251 [2024-10-08 18:26:27.332197] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
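The --json /dev/fd/62 in the bdevio invocation above is the footprint of bash process substitution: the JSON emitted by gen_nvmf_target_json (printed in full in the trace) reaches bdevio over an anonymous file descriptor, so no temporary config file is written or cleaned up. A minimal equivalent, assuming the helper is sourced from nvmf/common.sh:

# Hand the generated controller config to bdevio on an anonymous fd.
./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024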
00:18:34.251 passed 00:18:34.251 Test: blockdev write read 8 blocks ...passed 00:18:34.251 Test: blockdev write read size > 128k ...passed 00:18:34.251 Test: blockdev write read invalid size ...passed 00:18:34.251 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:34.251 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:34.251 Test: blockdev write read max offset ...passed 00:18:34.251 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:34.251 Test: blockdev writev readv 8 blocks ...passed 00:18:34.251 Test: blockdev writev readv 30 x 1block ...passed 00:18:34.511 Test: blockdev writev readv block ...passed 00:18:34.511 Test: blockdev writev readv size > 128k ...passed 00:18:34.511 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:34.511 Test: blockdev comparev and writev ...[2024-10-08 18:26:27.586251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:34.511 [2024-10-08 18:26:27.586281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.511 [2024-10-08 18:26:27.586295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:34.511 [2024-10-08 18:26:27.586302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:34.511 [2024-10-08 18:26:27.586534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:34.511 [2024-10-08 18:26:27.586544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:34.511 [2024-10-08 18:26:27.586556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:34.511 [2024-10-08 18:26:27.586563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:34.511 [2024-10-08 18:26:27.586787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:34.511 [2024-10-08 18:26:27.586797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:34.511 [2024-10-08 18:26:27.586809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:34.511 [2024-10-08 18:26:27.586816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:34.511 [2024-10-08 18:26:27.587054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:34.511 [2024-10-08 18:26:27.587064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:34.511 [2024-10-08 18:26:27.587075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:34.511 [2024-10-08 18:26:27.587082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:34.511 passed 00:18:34.511 Test: blockdev nvme passthru rw ...passed 00:18:34.511 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:26:27.668699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:34.511 [2024-10-08 18:26:27.668716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:34.511 [2024-10-08 18:26:27.668823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:34.511 [2024-10-08 18:26:27.668832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:34.511 [2024-10-08 18:26:27.668938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:34.511 [2024-10-08 18:26:27.668947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:34.511 [2024-10-08 18:26:27.669051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:34.511 [2024-10-08 18:26:27.669060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:34.511 passed 00:18:34.511 Test: blockdev nvme admin passthru ...passed 00:18:34.511 Test: blockdev copy ...passed 00:18:34.511 00:18:34.511 Run Summary: Type Total Ran Passed Failed Inactive 00:18:34.511 suites 1 1 n/a 0 0 00:18:34.511 tests 23 23 23 0 0 00:18:34.511 asserts 152 152 152 0 n/a 00:18:34.511 00:18:34.511 Elapsed time = 1.146 seconds 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:34.770 rmmod nvme_tcp 00:18:34.770 rmmod nvme_fabrics 00:18:34.770 rmmod nvme_keyring 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 431490 ']' 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 431490 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 431490 ']' 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 431490 00:18:34.770 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:35.029 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:35.029 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 431490 00:18:35.029 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:35.029 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:35.029 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 431490' 00:18:35.029 killing process with pid 431490 00:18:35.029 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 431490 00:18:35.029 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 431490 00:18:35.287 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:35.287 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:35.287 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:35.287 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:35.287 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:35.287 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:18:35.287 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:18:35.287 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:35.287 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:35.287 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.287 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.287 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:37.826 00:18:37.826 real 0m11.122s 00:18:37.826 user 0m14.458s 00:18:37.826 sys 0m5.466s 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.826 ************************************ 00:18:37.826 END TEST nvmf_bdevio_no_huge 00:18:37.826 ************************************ 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:37.826 ************************************ 00:18:37.826 START TEST nvmf_tls 00:18:37.826 ************************************ 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:37.826 * Looking for test storage... 00:18:37.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:37.826 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:37.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.827 --rc genhtml_branch_coverage=1 00:18:37.827 --rc genhtml_function_coverage=1 00:18:37.827 --rc genhtml_legend=1 00:18:37.827 --rc geninfo_all_blocks=1 00:18:37.827 --rc geninfo_unexecuted_blocks=1 00:18:37.827 00:18:37.827 ' 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:37.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.827 --rc genhtml_branch_coverage=1 00:18:37.827 --rc genhtml_function_coverage=1 00:18:37.827 --rc genhtml_legend=1 00:18:37.827 --rc geninfo_all_blocks=1 00:18:37.827 --rc geninfo_unexecuted_blocks=1 00:18:37.827 00:18:37.827 ' 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:37.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.827 --rc genhtml_branch_coverage=1 00:18:37.827 --rc genhtml_function_coverage=1 00:18:37.827 --rc genhtml_legend=1 00:18:37.827 --rc geninfo_all_blocks=1 00:18:37.827 --rc geninfo_unexecuted_blocks=1 00:18:37.827 00:18:37.827 ' 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:37.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.827 --rc genhtml_branch_coverage=1 00:18:37.827 --rc genhtml_function_coverage=1 00:18:37.827 --rc genhtml_legend=1 00:18:37.827 --rc geninfo_all_blocks=1 00:18:37.827 --rc geninfo_unexecuted_blocks=1 00:18:37.827 00:18:37.827 ' 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
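The lt 1.15 2 evaluation traced above is the framework's component-wise version compare; here it decides that the installed lcov predates 2.x and therefore still takes the legacy --rc lcov_* option spellings. A minimal standalone rendering (numeric fields only; the real helper in scripts/common.sh additionally validates each field through its decimal check):

# Return 0 when dotted version $1 sorts before $2.
lt() {
    local IFS=.-:                      # split fields on the same separators as the trace
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                           # versions compare equal
}
lt 1.15 2 && echo 'lcov < 2: keep the --rc lcov_branch_coverage=1 style options'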
00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:37.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:37.827 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
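The e810/x722/mlx arrays assembled in this stretch of the trace key off pci_bus_cache, the framework's cached PCI inventory, to collect every supported NIC by vendor:device ID before test interfaces are picked. Stripped of that cache, the same discovery can be phrased directly against pciutils and sysfs; a sketch using the E810 ID from the trace:

# Locate Intel E810 ports (8086:159b) and the net devices they expose.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
    done
done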
00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:44.403 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:44.403 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.403 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:44.404 Found net devices under 0000:86:00.0: cvl_0_0 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:44.404 Found net devices under 0000:86:00.1: cvl_0_1 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:44.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:18:44.404 00:18:44.404 --- 10.0.0.2 ping statistics --- 00:18:44.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.404 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:44.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:44.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:18:44.404 00:18:44.404 --- 10.0.0.1 ping statistics --- 00:18:44.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.404 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=435503 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 435503 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 435503 ']' 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.404 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.404 [2024-10-08 18:26:36.878698] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
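The network plumbing that just completed reduces to a short sequence: move one port of the E810 pair into a private namespace for the target, address both sides, open the NVMe/TCP port, and ping in both directions. Condensed from the commands above, using the interface and address names from this run (root privileges assumed):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns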
00:18:44.404 [2024-10-08 18:26:36.878742] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.404 [2024-10-08 18:26:36.952952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.404 [2024-10-08 18:26:37.032073] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.404 [2024-10-08 18:26:37.032108] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.404 [2024-10-08 18:26:37.032115] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.404 [2024-10-08 18:26:37.032121] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.404 [2024-10-08 18:26:37.032127] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.404 [2024-10-08 18:26:37.032550] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.404 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.404 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:44.404 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:44.404 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.404 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.663 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.663 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:44.663 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:44.663 true 00:18:44.663 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:44.664 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:44.923 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:44.923 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:44.923 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:45.182 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:45.182 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:45.182 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:45.182 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:45.182 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:45.440 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:45.440 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:45.699 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:45.699 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:45.699 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:45.699 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:45.958 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:45.958 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:45.958 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:45.958 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:45.958 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:46.217 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:46.217 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:46.217 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:46.476 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:46.476 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.qL6HjIgqAA 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.GnEcw9Uwa2 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.qL6HjIgqAA 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.GnEcw9Uwa2 00:18:46.735 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:46.994 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:47.252 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.qL6HjIgqAA 00:18:47.252 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qL6HjIgqAA 00:18:47.252 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:47.252 [2024-10-08 18:26:40.538927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.252 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:47.511 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:47.770 [2024-10-08 18:26:40.903884] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:47.770 [2024-10-08 18:26:40.904089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.770 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:48.029 malloc0 00:18:48.029 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:48.029 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qL6HjIgqAA 00:18:48.289 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:48.548 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.qL6HjIgqAA 00:18:58.531 Initializing NVMe Controllers 00:18:58.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:58.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:58.531 Initialization complete. Launching workers. 00:18:58.531 ======================================================== 00:18:58.531 Latency(us) 00:18:58.531 Device Information : IOPS MiB/s Average min max 00:18:58.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16911.39 66.06 3784.48 1108.20 5606.40 00:18:58.531 ======================================================== 00:18:58.531 Total : 16911.39 66.06 3784.48 1108.20 5606.40 00:18:58.531 00:18:58.531 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qL6HjIgqAA 00:18:58.531 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:58.531 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:58.531 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:58.531 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qL6HjIgqAA 00:18:58.531 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:58.531 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=437919 00:18:58.531 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:58.532 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:58.532 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 437919 /var/tmp/bdevperf.sock 00:18:58.532 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 437919 ']' 00:18:58.532 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.532 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:58.532 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:58.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.532 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:58.532 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.532 [2024-10-08 18:26:51.848767] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:18:58.532 [2024-10-08 18:26:51.848820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437919 ] 00:18:58.790 [2024-10-08 18:26:51.917498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.790 [2024-10-08 18:26:51.995598] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.727 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:59.727 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:59.727 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qL6HjIgqAA 00:18:59.727 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:59.727 [2024-10-08 18:26:53.024213] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.985 TLSTESTn1 00:18:59.985 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:59.985 Running I/O for 10 seconds... 
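The key files wired in above (/tmp/tmp.qL6HjIgqAA registered as key0 on the target and handed to spdk_nvme_perf via --psk-path) hold retained PSKs in the NVMe TLS interchange format that format_interchange_psk printed earlier, NVMeTLSkey-1:01:<base64>:. A standalone reconstruction of that formatting, mirroring the python one-liner the trace pipes through; the append-CRC32-then-base64 framing and the reading of the 01 field as the SHA-256 hash indicator are my interpretation, so treat this as a sketch rather than the exact common.sh code:

python3 - <<'EOF'
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff"          # configured key bytes from this run
body = key + struct.pack("<I", zlib.crc32(key))    # key || little-endian CRC32
print("NVMeTLSkey-1:01:%s:" % base64.b64encode(body).decode())
EOF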
00:19:02.332 5378.00 IOPS, 21.01 MiB/s [2024-10-08T16:26:56.288Z] 5508.50 IOPS, 21.52 MiB/s [2024-10-08T16:26:57.250Z] 5468.67 IOPS, 21.36 MiB/s [2024-10-08T16:26:58.627Z] 5474.50 IOPS, 21.38 MiB/s [2024-10-08T16:26:59.562Z] 5503.20 IOPS, 21.50 MiB/s [2024-10-08T16:27:00.499Z] 5505.00 IOPS, 21.50 MiB/s [2024-10-08T16:27:01.437Z] 5516.57 IOPS, 21.55 MiB/s [2024-10-08T16:27:02.373Z] 5530.12 IOPS, 21.60 MiB/s [2024-10-08T16:27:03.309Z] 5467.78 IOPS, 21.36 MiB/s [2024-10-08T16:27:03.309Z] 5420.70 IOPS, 21.17 MiB/s 00:19:09.986 Latency(us) 00:19:09.986 [2024-10-08T16:27:03.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.986 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:09.986 Verification LBA range: start 0x0 length 0x2000 00:19:09.986 TLSTESTn1 : 10.02 5423.59 21.19 0.00 0.00 23565.11 4962.01 41443.72 00:19:09.986 [2024-10-08T16:27:03.309Z] =================================================================================================================== 00:19:09.986 [2024-10-08T16:27:03.309Z] Total : 5423.59 21.19 0.00 0.00 23565.11 4962.01 41443.72 00:19:09.986 { 00:19:09.986 "results": [ 00:19:09.986 { 00:19:09.986 "job": "TLSTESTn1", 00:19:09.986 "core_mask": "0x4", 00:19:09.986 "workload": "verify", 00:19:09.986 "status": "finished", 00:19:09.986 "verify_range": { 00:19:09.986 "start": 0, 00:19:09.986 "length": 8192 00:19:09.986 }, 00:19:09.986 "queue_depth": 128, 00:19:09.986 "io_size": 4096, 00:19:09.986 "runtime": 10.018088, 00:19:09.986 "iops": 5423.589810750314, 00:19:09.986 "mibps": 21.185897698243416, 00:19:09.986 "io_failed": 0, 00:19:09.986 "io_timeout": 0, 00:19:09.986 "avg_latency_us": 23565.108690480574, 00:19:09.986 "min_latency_us": 4962.011428571429, 00:19:09.986 "max_latency_us": 41443.718095238095 00:19:09.986 } 00:19:09.986 ], 00:19:09.986 "core_count": 1 00:19:09.986 } 00:19:09.986 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:09.986 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 437919 00:19:09.986 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 437919 ']' 00:19:09.986 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 437919 00:19:09.986 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:09.986 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:09.986 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 437919 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 437919' 00:19:10.246 killing process with pid 437919 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 437919 00:19:10.246 Received shutdown signal, test time was about 10.000000 seconds 00:19:10.246 00:19:10.246 Latency(us) 00:19:10.246 [2024-10-08T16:27:03.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.246 [2024-10-08T16:27:03.569Z] 
=================================================================================================================== 00:19:10.246 [2024-10-08T16:27:03.569Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 437919 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GnEcw9Uwa2 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GnEcw9Uwa2 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GnEcw9Uwa2 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GnEcw9Uwa2 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=440042 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 440042 /var/tmp/bdevperf.sock 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 440042 ']' 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:10.246 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.246 [2024-10-08 18:27:03.558021] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:19:10.246 [2024-10-08 18:27:03.558069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440042 ] 00:19:10.505 [2024-10-08 18:27:03.627345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.505 [2024-10-08 18:27:03.694237] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.441 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:11.441 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:11.441 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GnEcw9Uwa2 00:19:11.441 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:11.699 [2024-10-08 18:27:04.781303] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:11.699 [2024-10-08 18:27:04.790191] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:11.699 [2024-10-08 18:27:04.790788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a8230 (107): Transport endpoint is not connected 00:19:11.699 [2024-10-08 18:27:04.791781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a8230 (9): Bad file descriptor 00:19:11.699 [2024-10-08 18:27:04.792782] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:11.699 [2024-10-08 18:27:04.792792] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:11.699 [2024-10-08 18:27:04.792800] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:11.699 [2024-10-08 18:27:04.792808] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:11.699 request: 00:19:11.699 { 00:19:11.699 "name": "TLSTEST", 00:19:11.699 "trtype": "tcp", 00:19:11.699 "traddr": "10.0.0.2", 00:19:11.699 "adrfam": "ipv4", 00:19:11.699 "trsvcid": "4420", 00:19:11.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:11.699 "prchk_reftag": false, 00:19:11.699 "prchk_guard": false, 00:19:11.699 "hdgst": false, 00:19:11.699 "ddgst": false, 00:19:11.699 "psk": "key0", 00:19:11.699 "allow_unrecognized_csi": false, 00:19:11.699 "method": "bdev_nvme_attach_controller", 00:19:11.699 "req_id": 1 00:19:11.699 } 00:19:11.699 Got JSON-RPC error response 00:19:11.699 response: 00:19:11.699 { 00:19:11.699 "code": -5, 00:19:11.699 "message": "Input/output error" 00:19:11.699 } 00:19:11.699 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 440042 00:19:11.699 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 440042 ']' 00:19:11.699 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 440042 00:19:11.699 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:11.699 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:11.700 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 440042 00:19:11.700 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:11.700 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:11.700 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 440042' 00:19:11.700 killing process with pid 440042 00:19:11.700 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 440042 00:19:11.700 Received shutdown signal, test time was about 10.000000 seconds 00:19:11.700 00:19:11.700 Latency(us) 00:19:11.700 [2024-10-08T16:27:05.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.700 [2024-10-08T16:27:05.023Z] =================================================================================================================== 00:19:11.700 [2024-10-08T16:27:05.023Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:11.700 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 440042 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qL6HjIgqAA 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.qL6HjIgqAA 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qL6HjIgqAA 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qL6HjIgqAA 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=440286 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 440286 /var/tmp/bdevperf.sock 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 440286 ']' 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:11.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:11.959 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.959 [2024-10-08 18:27:05.104671] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:19:11.959 [2024-10-08 18:27:05.104719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440286 ] 00:19:11.959 [2024-10-08 18:27:05.173417] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.959 [2024-10-08 18:27:05.239919] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.897 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:12.897 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:12.897 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qL6HjIgqAA 00:19:12.897 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:13.156 [2024-10-08 18:27:06.324138] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.157 [2024-10-08 18:27:06.334021] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:13.157 [2024-10-08 18:27:06.334044] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:13.157 [2024-10-08 18:27:06.334069] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:13.157 [2024-10-08 18:27:06.334528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e7230 (107): Transport endpoint is not connected 00:19:13.157 [2024-10-08 18:27:06.335521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e7230 (9): Bad file descriptor 00:19:13.157 [2024-10-08 18:27:06.336523] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:13.157 [2024-10-08 18:27:06.336534] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:13.157 [2024-10-08 18:27:06.336542] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:13.157 [2024-10-08 18:27:06.336550] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
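Each of these failure cases drives the same pattern: the controller attach is expected to fail, the NOT wrapper inverts the exit status, and the es=1 bookkeeping in the trace confirms the failure was the anticipated one. Reduced to its shape, with rpc.py standing in for the full scripts/rpc.py path and NOT's body a plausible sketch of the autotest helper rather than a copy of it:

NOT() {   # succeed only if the wrapped command fails
  if "$@"; then return 1; else return 0; fi
}
# e.g. attaching with a hostnqn the subsystem never allowed must fail:
NOT rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0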
00:19:13.157 request: 00:19:13.157 { 00:19:13.157 "name": "TLSTEST", 00:19:13.157 "trtype": "tcp", 00:19:13.157 "traddr": "10.0.0.2", 00:19:13.157 "adrfam": "ipv4", 00:19:13.157 "trsvcid": "4420", 00:19:13.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.157 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:13.157 "prchk_reftag": false, 00:19:13.157 "prchk_guard": false, 00:19:13.157 "hdgst": false, 00:19:13.157 "ddgst": false, 00:19:13.157 "psk": "key0", 00:19:13.157 "allow_unrecognized_csi": false, 00:19:13.157 "method": "bdev_nvme_attach_controller", 00:19:13.157 "req_id": 1 00:19:13.157 } 00:19:13.157 Got JSON-RPC error response 00:19:13.157 response: 00:19:13.157 { 00:19:13.157 "code": -5, 00:19:13.157 "message": "Input/output error" 00:19:13.157 } 00:19:13.157 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 440286 00:19:13.157 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 440286 ']' 00:19:13.157 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 440286 00:19:13.157 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:13.157 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.157 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 440286 00:19:13.157 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:13.157 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:13.157 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 440286' 00:19:13.157 killing process with pid 440286 00:19:13.157 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 440286 00:19:13.157 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.157 00:19:13.157 Latency(us) 00:19:13.157 [2024-10-08T16:27:06.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.157 [2024-10-08T16:27:06.480Z] =================================================================================================================== 00:19:13.157 [2024-10-08T16:27:06.480Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:13.157 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 440286 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qL6HjIgqAA 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.qL6HjIgqAA 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qL6HjIgqAA 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qL6HjIgqAA 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=440527 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 440527 /var/tmp/bdevperf.sock 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 440527 ']' 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:13.416 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.416 [2024-10-08 18:27:06.652574] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:19:13.416 [2024-10-08 18:27:06.652621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440527 ] 00:19:13.417 [2024-10-08 18:27:06.721728] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.675 [2024-10-08 18:27:06.791290] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.243 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:14.243 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:14.243 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qL6HjIgqAA 00:19:14.502 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:14.761 [2024-10-08 18:27:07.891815] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.761 [2024-10-08 18:27:07.897991] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:14.761 [2024-10-08 18:27:07.898013] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:14.761 [2024-10-08 18:27:07.898039] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:14.761 [2024-10-08 18:27:07.898206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2357230 (107): Transport endpoint is not connected 00:19:14.761 [2024-10-08 18:27:07.899199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2357230 (9): Bad file descriptor 00:19:14.761 [2024-10-08 18:27:07.900200] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:14.761 [2024-10-08 18:27:07.900214] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:14.761 [2024-10-08 18:27:07.900222] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:14.761 [2024-10-08 18:27:07.900230] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
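Passing or failing, every case in this block runs the same initiator-side sequence against bdevperf's private RPC socket, and each negative case swaps exactly one parameter of it (the key file, the hostnqn, or the subnqn). The baseline sequence, condensed from the calls above with rpc.py and bdevperf.py standing in for the full paths shown in the trace:

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qL6HjIgqAA
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests   # from spdk/examples/bdev/bdevperf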
00:19:14.761 request: 00:19:14.761 { 00:19:14.761 "name": "TLSTEST", 00:19:14.761 "trtype": "tcp", 00:19:14.761 "traddr": "10.0.0.2", 00:19:14.761 "adrfam": "ipv4", 00:19:14.761 "trsvcid": "4420", 00:19:14.761 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:14.761 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.761 "prchk_reftag": false, 00:19:14.761 "prchk_guard": false, 00:19:14.761 "hdgst": false, 00:19:14.761 "ddgst": false, 00:19:14.761 "psk": "key0", 00:19:14.761 "allow_unrecognized_csi": false, 00:19:14.761 "method": "bdev_nvme_attach_controller", 00:19:14.761 "req_id": 1 00:19:14.761 } 00:19:14.761 Got JSON-RPC error response 00:19:14.761 response: 00:19:14.761 { 00:19:14.762 "code": -5, 00:19:14.762 "message": "Input/output error" 00:19:14.762 } 00:19:14.762 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 440527 00:19:14.762 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 440527 ']' 00:19:14.762 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 440527 00:19:14.762 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:14.762 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.762 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 440527 00:19:14.762 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:14.762 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:14.762 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 440527' 00:19:14.762 killing process with pid 440527 00:19:14.762 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 440527 00:19:14.762 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.762 00:19:14.762 Latency(us) 00:19:14.762 [2024-10-08T16:27:08.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.762 [2024-10-08T16:27:08.085Z] =================================================================================================================== 00:19:14.762 [2024-10-08T16:27:08.085Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:14.762 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 440527 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:15.021 18:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=440772 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 440772 /var/tmp/bdevperf.sock 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 440772 ']' 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:15.021 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.021 [2024-10-08 18:27:08.192515] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:19:15.021 [2024-10-08 18:27:08.192562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440772 ] 00:19:15.021 [2024-10-08 18:27:08.260598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.021 [2024-10-08 18:27:08.327424] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.279 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.279 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:15.279 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:15.279 [2024-10-08 18:27:08.600438] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:15.279 [2024-10-08 18:27:08.600474] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:15.538 request: 00:19:15.538 { 00:19:15.538 "name": "key0", 00:19:15.538 "path": "", 00:19:15.538 "method": "keyring_file_add_key", 00:19:15.538 "req_id": 1 00:19:15.538 } 00:19:15.538 Got JSON-RPC error response 00:19:15.538 response: 00:19:15.538 { 00:19:15.538 "code": -1, 00:19:15.538 "message": "Operation not permitted" 00:19:15.538 } 00:19:15.538 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:15.538 [2024-10-08 18:27:08.805057] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.538 [2024-10-08 18:27:08.805089] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:15.538 request: 00:19:15.538 { 00:19:15.538 "name": "TLSTEST", 00:19:15.538 "trtype": "tcp", 00:19:15.538 "traddr": "10.0.0.2", 00:19:15.538 "adrfam": "ipv4", 00:19:15.538 "trsvcid": "4420", 00:19:15.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.538 "prchk_reftag": false, 00:19:15.538 "prchk_guard": false, 00:19:15.538 "hdgst": false, 00:19:15.538 "ddgst": false, 00:19:15.538 "psk": "key0", 00:19:15.538 "allow_unrecognized_csi": false, 00:19:15.538 "method": "bdev_nvme_attach_controller", 00:19:15.538 "req_id": 1 00:19:15.538 } 00:19:15.538 Got JSON-RPC error response 00:19:15.538 response: 00:19:15.538 { 00:19:15.538 "code": -126, 00:19:15.538 "message": "Required key not available" 00:19:15.538 } 00:19:15.538 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 440772 00:19:15.538 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 440772 ']' 00:19:15.538 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 440772 00:19:15.538 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:15.538 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.538 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 440772 
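The NOT wrapper that drives these negative cases (the es=0 / es=1 bookkeeping traced above) is, in essence, an expected-failure assertion around the wrapped command. A simplified sketch of the idea only, not the exact autotest_common.sh implementation, which as the trace shows also special-cases exit statuses above 128:

    # Simplified sketch (assumption): invert the wrapped command's status so
    # that an expected failure makes the test step pass.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # succeed only if the wrapped command failed
    }
    NOT false && echo "failure observed, as required"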
00:19:15.797 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:15.797 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:15.797 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 440772' 00:19:15.797 killing process with pid 440772 00:19:15.797 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 440772 00:19:15.797 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.797 00:19:15.797 Latency(us) 00:19:15.797 [2024-10-08T16:27:09.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.797 [2024-10-08T16:27:09.120Z] =================================================================================================================== 00:19:15.797 [2024-10-08T16:27:09.120Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:15.797 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 440772 00:19:15.797 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:15.797 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:15.797 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:15.797 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:15.797 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:15.797 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 435503 00:19:15.797 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 435503 ']' 00:19:15.797 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 435503 00:19:15.798 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:15.798 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.798 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 435503 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 435503' 00:19:16.057 killing process with pid 435503 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 435503 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 435503 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.oAhvjHIcuA 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.oAhvjHIcuA 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=441286 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 441286 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 441286 ']' 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.057 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:16.316 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.316 [2024-10-08 18:27:09.419255] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:19:16.316 [2024-10-08 18:27:09.419304] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.316 [2024-10-08 18:27:09.492922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.316 [2024-10-08 18:27:09.569968] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.316 [2024-10-08 18:27:09.570005] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
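The key material above is worth unpacking. format_interchange_psk turns the secret and digest selector 2 into the TP 8006 interchange string NVMeTLSkey-1:02:...: via the `python -` step in the trace. A hedged reconstruction of that step, with two assumptions: the 48-character hex string's ASCII bytes are used as the PSK (which is what the base64 payload above decodes to), and the CRC-32 is appended little-endian; the '02' field selects the PSK digest (SHA-384, if I read TP 8006's mapping right).

    key_hex=00112233445566778899aabbccddeeff0011223344556677
    python3 - "$key_hex" <<'PY'
    import base64, struct, sys, zlib
    key = sys.argv[1].encode()                              # ASCII bytes of the hex string
    crc = struct.pack('<I', zlib.crc32(key) & 0xffffffff)   # CRC-32, little-endian (assumed)
    print('NVMeTLSkey-1:02:%s:' % base64.b64encode(key + crc).decode())
    PY
    # should print the key_long value captured above

The chmod 0600 on the temp file is not incidental: the keyring's permission check on key files is exercised as its own negative case further below.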
00:19:16.316 [2024-10-08 18:27:09.570012] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.317 [2024-10-08 18:27:09.570017] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.317 [2024-10-08 18:27:09.570022] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.317 [2024-10-08 18:27:09.570584] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.253 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:17.253 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:17.253 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:17.253 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:17.253 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.253 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.253 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.oAhvjHIcuA 00:19:17.253 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oAhvjHIcuA 00:19:17.253 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:17.253 [2024-10-08 18:27:10.469890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.253 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:17.512 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:17.771 [2024-10-08 18:27:10.858872] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:17.771 [2024-10-08 18:27:10.859074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.771 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:17.771 malloc0 00:19:18.030 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:18.030 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oAhvjHIcuA 00:19:18.289 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:18.547 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oAhvjHIcuA 00:19:18.547 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:18.547 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:18.547 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:18.547 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oAhvjHIcuA 00:19:18.547 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:18.547 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=441761 00:19:18.547 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:18.547 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:18.547 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 441761 /var/tmp/bdevperf.sock 00:19:18.547 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 441761 ']' 00:19:18.547 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.547 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.547 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.548 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.548 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.548 [2024-10-08 18:27:11.706181] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
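For this positive case the target, not bdevperf, is configured first; condensed from the setup_nvmf_tgt trace above, with the rpc.py path shortened, the sequence is:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k    # -k: TLS listener, per the experimental-TLS notice
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.oAhvjHIcuA       # key file must be 0600
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

Only after this does the bdevperf client whose startup is logged above attach and run I/O.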
00:19:18.548 [2024-10-08 18:27:11.706231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441761 ] 00:19:18.548 [2024-10-08 18:27:11.771065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.548 [2024-10-08 18:27:11.842543] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.485 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.485 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:19.485 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oAhvjHIcuA 00:19:19.485 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:19.744 [2024-10-08 18:27:12.882426] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:19.744 TLSTESTn1 00:19:19.744 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:20.003 Running I/O for 10 seconds... 00:19:21.914 5275.00 IOPS, 20.61 MiB/s [2024-10-08T16:27:16.174Z] 5389.50 IOPS, 21.05 MiB/s [2024-10-08T16:27:17.110Z] 5461.67 IOPS, 21.33 MiB/s [2024-10-08T16:27:18.493Z] 5466.50 IOPS, 21.35 MiB/s [2024-10-08T16:27:19.430Z] 5494.00 IOPS, 21.46 MiB/s [2024-10-08T16:27:20.366Z] 5506.33 IOPS, 21.51 MiB/s [2024-10-08T16:27:21.304Z] 5518.86 IOPS, 21.56 MiB/s [2024-10-08T16:27:22.240Z] 5534.62 IOPS, 21.62 MiB/s [2024-10-08T16:27:23.177Z] 5543.78 IOPS, 21.66 MiB/s [2024-10-08T16:27:23.177Z] 5554.90 IOPS, 21.70 MiB/s 00:19:29.854 Latency(us) 00:19:29.854 [2024-10-08T16:27:23.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.854 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:29.854 Verification LBA range: start 0x0 length 0x2000 00:19:29.854 TLSTESTn1 : 10.02 5558.15 21.71 0.00 0.00 22993.90 4681.14 30833.13 00:19:29.854 [2024-10-08T16:27:23.177Z] =================================================================================================================== 00:19:29.854 [2024-10-08T16:27:23.177Z] Total : 5558.15 21.71 0.00 0.00 22993.90 4681.14 30833.13 00:19:29.854 { 00:19:29.854 "results": [ 00:19:29.854 { 00:19:29.854 "job": "TLSTESTn1", 00:19:29.854 "core_mask": "0x4", 00:19:29.854 "workload": "verify", 00:19:29.854 "status": "finished", 00:19:29.854 "verify_range": { 00:19:29.854 "start": 0, 00:19:29.854 "length": 8192 00:19:29.854 }, 00:19:29.854 "queue_depth": 128, 00:19:29.854 "io_size": 4096, 00:19:29.854 "runtime": 10.017001, 00:19:29.854 "iops": 5558.150588185026, 00:19:29.854 "mibps": 21.71152573509776, 00:19:29.854 "io_failed": 0, 00:19:29.854 "io_timeout": 0, 00:19:29.854 "avg_latency_us": 22993.901064868507, 00:19:29.854 "min_latency_us": 4681.142857142857, 00:19:29.854 "max_latency_us": 30833.12761904762 00:19:29.854 } 00:19:29.854 ], 00:19:29.854 
"core_count": 1 00:19:29.854 } 00:19:29.854 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:29.854 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 441761 00:19:29.854 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 441761 ']' 00:19:29.854 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 441761 00:19:29.854 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:29.854 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.854 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 441761 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 441761' 00:19:30.114 killing process with pid 441761 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 441761 00:19:30.114 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.114 00:19:30.114 Latency(us) 00:19:30.114 [2024-10-08T16:27:23.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.114 [2024-10-08T16:27:23.437Z] =================================================================================================================== 00:19:30.114 [2024-10-08T16:27:23.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 441761 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.oAhvjHIcuA 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oAhvjHIcuA 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oAhvjHIcuA 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oAhvjHIcuA 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:30.114 
18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oAhvjHIcuA 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=443725 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 443725 /var/tmp/bdevperf.sock 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 443725 ']' 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.114 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.374 [2024-10-08 18:27:23.443151] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
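The chmod 0666 above sets up the next negative case: the keyring refuses key files readable by group or others, so registration is expected to fail before any TLS handshake is attempted (error text from the trace that follows):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    chmod 0666 /tmp/tmp.oAhvjHIcuA
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oAhvjHIcuA
    # -> keyring: "Invalid permissions for key file '/tmp/tmp.oAhvjHIcuA': 0100666"
    #    JSON-RPC: code -1, "Operation not permitted"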
00:19:30.374 [2024-10-08 18:27:23.443197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid443725 ] 00:19:30.374 [2024-10-08 18:27:23.505692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.374 [2024-10-08 18:27:23.572180] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.310 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.310 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:31.310 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oAhvjHIcuA 00:19:31.310 [2024-10-08 18:27:24.435380] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.oAhvjHIcuA': 0100666 00:19:31.310 [2024-10-08 18:27:24.435411] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:31.310 request: 00:19:31.310 { 00:19:31.310 "name": "key0", 00:19:31.310 "path": "/tmp/tmp.oAhvjHIcuA", 00:19:31.310 "method": "keyring_file_add_key", 00:19:31.310 "req_id": 1 00:19:31.310 } 00:19:31.310 Got JSON-RPC error response 00:19:31.310 response: 00:19:31.310 { 00:19:31.310 "code": -1, 00:19:31.310 "message": "Operation not permitted" 00:19:31.310 } 00:19:31.310 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:31.569 [2024-10-08 18:27:24.635977] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.569 [2024-10-08 18:27:24.636009] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:31.569 request: 00:19:31.569 { 00:19:31.569 "name": "TLSTEST", 00:19:31.569 "trtype": "tcp", 00:19:31.569 "traddr": "10.0.0.2", 00:19:31.569 "adrfam": "ipv4", 00:19:31.569 "trsvcid": "4420", 00:19:31.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.569 "prchk_reftag": false, 00:19:31.569 "prchk_guard": false, 00:19:31.569 "hdgst": false, 00:19:31.569 "ddgst": false, 00:19:31.569 "psk": "key0", 00:19:31.569 "allow_unrecognized_csi": false, 00:19:31.569 "method": "bdev_nvme_attach_controller", 00:19:31.569 "req_id": 1 00:19:31.569 } 00:19:31.569 Got JSON-RPC error response 00:19:31.569 response: 00:19:31.569 { 00:19:31.569 "code": -126, 00:19:31.569 "message": "Required key not available" 00:19:31.569 } 00:19:31.569 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 443725 00:19:31.569 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 443725 ']' 00:19:31.569 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 443725 00:19:31.569 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:31.569 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.569 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 443725 00:19:31.569 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:31.569 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:31.569 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 443725' 00:19:31.569 killing process with pid 443725 00:19:31.569 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 443725 00:19:31.569 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.569 00:19:31.569 Latency(us) 00:19:31.569 [2024-10-08T16:27:24.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.569 [2024-10-08T16:27:24.892Z] =================================================================================================================== 00:19:31.569 [2024-10-08T16:27:24.892Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:31.569 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 443725 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 441286 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 441286 ']' 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 441286 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 441286 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 441286' 00:19:31.828 killing process with pid 441286 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 441286 00:19:31.828 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 441286 00:19:31.828 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:31.828 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:31.828 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:31.828 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.086 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=443973 
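nvmfappstart, whose trace continues below, amounts to launching nvmf_tgt inside the test network namespace and recording its pid. Condensed sketch; the command line and flags are verbatim from the trace, while the backgrounding and $! capture are assumed to happen inside the helper:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # -e 0xFFFF enables all tracepoint groups (hence the spdk_trace notices);
    # -m 0x2 pins the app to core 1, matching "Reactor started on core 1" below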
00:19:32.087 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:32.087 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 443973 00:19:32.087 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 443973 ']' 00:19:32.087 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.087 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.087 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.087 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.087 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.087 [2024-10-08 18:27:25.194289] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:19:32.087 [2024-10-08 18:27:25.194334] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.087 [2024-10-08 18:27:25.263926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.087 [2024-10-08 18:27:25.329539] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.087 [2024-10-08 18:27:25.329582] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.087 [2024-10-08 18:27:25.329589] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.087 [2024-10-08 18:27:25.329595] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.087 [2024-10-08 18:27:25.329600] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:32.087 [2024-10-08 18:27:25.330111] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.oAhvjHIcuA 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.oAhvjHIcuA 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.oAhvjHIcuA 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oAhvjHIcuA 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:33.023 [2024-10-08 18:27:26.242020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.023 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:33.282 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:33.541 [2024-10-08 18:27:26.627026] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:33.541 [2024-10-08 18:27:26.627239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.541 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:33.541 malloc0 00:19:33.801 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:33.801 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oAhvjHIcuA 00:19:34.060 [2024-10-08 
18:27:27.231257] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.oAhvjHIcuA': 0100666 00:19:34.060 [2024-10-08 18:27:27.231284] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:34.060 request: 00:19:34.060 { 00:19:34.060 "name": "key0", 00:19:34.060 "path": "/tmp/tmp.oAhvjHIcuA", 00:19:34.060 "method": "keyring_file_add_key", 00:19:34.060 "req_id": 1 00:19:34.061 } 00:19:34.061 Got JSON-RPC error response 00:19:34.061 response: 00:19:34.061 { 00:19:34.061 "code": -1, 00:19:34.061 "message": "Operation not permitted" 00:19:34.061 } 00:19:34.061 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:34.320 [2024-10-08 18:27:27.439839] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:34.320 [2024-10-08 18:27:27.439870] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:34.320 request: 00:19:34.320 { 00:19:34.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.320 "host": "nqn.2016-06.io.spdk:host1", 00:19:34.320 "psk": "key0", 00:19:34.320 "method": "nvmf_subsystem_add_host", 00:19:34.320 "req_id": 1 00:19:34.320 } 00:19:34.320 Got JSON-RPC error response 00:19:34.320 response: 00:19:34.320 { 00:19:34.320 "code": -32603, 00:19:34.320 "message": "Internal error" 00:19:34.320 } 00:19:34.320 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:34.320 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:34.320 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:34.320 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:34.320 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 443973 00:19:34.320 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 443973 ']' 00:19:34.320 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 443973 00:19:34.320 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:34.320 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:34.320 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 443973 00:19:34.320 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:34.320 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:34.320 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 443973' 00:19:34.320 killing process with pid 443973 00:19:34.320 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 443973 00:19:34.320 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 443973 00:19:34.578 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.oAhvjHIcuA 00:19:34.578 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:34.578 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:34.578 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.579 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.579 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=444467 00:19:34.579 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 444467 00:19:34.579 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:34.579 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 444467 ']' 00:19:34.579 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.579 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.579 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.579 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.579 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.579 [2024-10-08 18:27:27.791078] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:19:34.579 [2024-10-08 18:27:27.791125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.579 [2024-10-08 18:27:27.860508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.839 [2024-10-08 18:27:27.936529] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.839 [2024-10-08 18:27:27.936579] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.839 [2024-10-08 18:27:27.936585] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.839 [2024-10-08 18:27:27.936591] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.839 [2024-10-08 18:27:27.936596] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
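Once the 0600 permissions are restored below and the attach succeeds, the test snapshots the entire live target with save_config; the large JSON dump further below, from the keyring subsystem through nvmf, is that snapshot. A sketch of the round trip, assuming the usual save_config/load_config pairing:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc save_config > tgtconf.json    # serialize the running target (the JSON below)
    # and later, to replay it into a freshly started target:
    $rpc load_config < tgtconf.json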
00:19:34.839 [2024-10-08 18:27:27.937173] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.408 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.408 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:35.408 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:35.408 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.408 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.408 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.408 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.oAhvjHIcuA 00:19:35.408 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oAhvjHIcuA 00:19:35.408 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:35.666 [2024-10-08 18:27:28.844275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.666 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:35.926 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:35.926 [2024-10-08 18:27:29.237277] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:35.926 [2024-10-08 18:27:29.237505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.185 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:36.185 malloc0 00:19:36.185 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:36.444 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oAhvjHIcuA 00:19:36.702 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:36.962 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=444794 00:19:36.962 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.962 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.962 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 444794 /var/tmp/bdevperf.sock 00:19:36.962 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 444794 ']' 00:19:36.962 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.962 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.962 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.962 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.962 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.962 [2024-10-08 18:27:30.116310] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:19:36.962 [2024-10-08 18:27:30.116360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444794 ] 00:19:36.962 [2024-10-08 18:27:30.184131] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.962 [2024-10-08 18:27:30.256060] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.221 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:37.221 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:37.221 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oAhvjHIcuA 00:19:37.480 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:37.480 [2024-10-08 18:27:30.710740] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.480 TLSTESTn1 00:19:37.739 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:37.999 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:37.999 "subsystems": [ 00:19:37.999 { 00:19:37.999 "subsystem": "keyring", 00:19:37.999 "config": [ 00:19:37.999 { 00:19:37.999 "method": "keyring_file_add_key", 00:19:37.999 "params": { 00:19:37.999 "name": "key0", 00:19:37.999 "path": "/tmp/tmp.oAhvjHIcuA" 00:19:37.999 } 00:19:37.999 } 00:19:37.999 ] 00:19:37.999 }, 00:19:37.999 { 00:19:37.999 "subsystem": "iobuf", 00:19:37.999 "config": [ 00:19:37.999 { 00:19:37.999 "method": "iobuf_set_options", 00:19:37.999 "params": { 00:19:37.999 "small_pool_count": 8192, 00:19:37.999 "large_pool_count": 1024, 00:19:37.999 "small_bufsize": 8192, 00:19:37.999 "large_bufsize": 135168 00:19:37.999 } 00:19:37.999 } 00:19:37.999 ] 00:19:37.999 }, 00:19:37.999 { 00:19:37.999 "subsystem": "sock", 00:19:37.999 "config": [ 00:19:37.999 { 00:19:37.999 "method": "sock_set_default_impl", 00:19:37.999 "params": { 00:19:37.999 "impl_name": "posix" 00:19:37.999 } 00:19:37.999 }, 
00:19:37.999 { 00:19:37.999 "method": "sock_impl_set_options", 00:19:37.999 "params": { 00:19:37.999 "impl_name": "ssl", 00:19:37.999 "recv_buf_size": 4096, 00:19:37.999 "send_buf_size": 4096, 00:19:37.999 "enable_recv_pipe": true, 00:19:37.999 "enable_quickack": false, 00:19:37.999 "enable_placement_id": 0, 00:19:37.999 "enable_zerocopy_send_server": true, 00:19:37.999 "enable_zerocopy_send_client": false, 00:19:37.999 "zerocopy_threshold": 0, 00:19:37.999 "tls_version": 0, 00:19:37.999 "enable_ktls": false 00:19:37.999 } 00:19:37.999 }, 00:19:37.999 { 00:19:37.999 "method": "sock_impl_set_options", 00:19:37.999 "params": { 00:19:37.999 "impl_name": "posix", 00:19:37.999 "recv_buf_size": 2097152, 00:19:37.999 "send_buf_size": 2097152, 00:19:37.999 "enable_recv_pipe": true, 00:19:37.999 "enable_quickack": false, 00:19:37.999 "enable_placement_id": 0, 00:19:37.999 "enable_zerocopy_send_server": true, 00:19:37.999 "enable_zerocopy_send_client": false, 00:19:37.999 "zerocopy_threshold": 0, 00:19:37.999 "tls_version": 0, 00:19:37.999 "enable_ktls": false 00:19:37.999 } 00:19:37.999 } 00:19:37.999 ] 00:19:37.999 }, 00:19:37.999 { 00:19:37.999 "subsystem": "vmd", 00:19:37.999 "config": [] 00:19:37.999 }, 00:19:37.999 { 00:19:37.999 "subsystem": "accel", 00:19:37.999 "config": [ 00:19:37.999 { 00:19:37.999 "method": "accel_set_options", 00:19:37.999 "params": { 00:19:37.999 "small_cache_size": 128, 00:19:37.999 "large_cache_size": 16, 00:19:37.999 "task_count": 2048, 00:19:37.999 "sequence_count": 2048, 00:19:37.999 "buf_count": 2048 00:19:37.999 } 00:19:37.999 } 00:19:37.999 ] 00:19:37.999 }, 00:19:37.999 { 00:19:37.999 "subsystem": "bdev", 00:19:37.999 "config": [ 00:19:37.999 { 00:19:37.999 "method": "bdev_set_options", 00:19:37.999 "params": { 00:19:37.999 "bdev_io_pool_size": 65535, 00:19:37.999 "bdev_io_cache_size": 256, 00:19:37.999 "bdev_auto_examine": true, 00:19:37.999 "iobuf_small_cache_size": 128, 00:19:37.999 "iobuf_large_cache_size": 16 00:19:37.999 } 00:19:37.999 }, 00:19:37.999 { 00:19:37.999 "method": "bdev_raid_set_options", 00:19:37.999 "params": { 00:19:37.999 "process_window_size_kb": 1024, 00:19:37.999 "process_max_bandwidth_mb_sec": 0 00:19:37.999 } 00:19:37.999 }, 00:19:37.999 { 00:19:37.999 "method": "bdev_iscsi_set_options", 00:19:37.999 "params": { 00:19:37.999 "timeout_sec": 30 00:19:37.999 } 00:19:37.999 }, 00:19:37.999 { 00:19:37.999 "method": "bdev_nvme_set_options", 00:19:37.999 "params": { 00:19:37.999 "action_on_timeout": "none", 00:19:37.999 "timeout_us": 0, 00:19:37.999 "timeout_admin_us": 0, 00:19:37.999 "keep_alive_timeout_ms": 10000, 00:19:37.999 "arbitration_burst": 0, 00:19:37.999 "low_priority_weight": 0, 00:19:37.999 "medium_priority_weight": 0, 00:19:37.999 "high_priority_weight": 0, 00:19:37.999 "nvme_adminq_poll_period_us": 10000, 00:19:37.999 "nvme_ioq_poll_period_us": 0, 00:19:37.999 "io_queue_requests": 0, 00:19:37.999 "delay_cmd_submit": true, 00:19:37.999 "transport_retry_count": 4, 00:19:37.999 "bdev_retry_count": 3, 00:19:37.999 "transport_ack_timeout": 0, 00:19:37.999 "ctrlr_loss_timeout_sec": 0, 00:19:37.999 "reconnect_delay_sec": 0, 00:19:37.999 "fast_io_fail_timeout_sec": 0, 00:19:37.999 "disable_auto_failback": false, 00:19:37.999 "generate_uuids": false, 00:19:37.999 "transport_tos": 0, 00:19:37.999 "nvme_error_stat": false, 00:19:37.999 "rdma_srq_size": 0, 00:19:37.999 "io_path_stat": false, 00:19:37.999 "allow_accel_sequence": false, 00:19:37.999 "rdma_max_cq_size": 0, 00:19:37.999 "rdma_cm_event_timeout_ms": 0, 00:19:37.999 
"dhchap_digests": [ 00:19:37.999 "sha256", 00:19:37.999 "sha384", 00:19:37.999 "sha512" 00:19:37.999 ], 00:19:37.999 "dhchap_dhgroups": [ 00:19:38.000 "null", 00:19:38.000 "ffdhe2048", 00:19:38.000 "ffdhe3072", 00:19:38.000 "ffdhe4096", 00:19:38.000 "ffdhe6144", 00:19:38.000 "ffdhe8192" 00:19:38.000 ] 00:19:38.000 } 00:19:38.000 }, 00:19:38.000 { 00:19:38.000 "method": "bdev_nvme_set_hotplug", 00:19:38.000 "params": { 00:19:38.000 "period_us": 100000, 00:19:38.000 "enable": false 00:19:38.000 } 00:19:38.000 }, 00:19:38.000 { 00:19:38.000 "method": "bdev_malloc_create", 00:19:38.000 "params": { 00:19:38.000 "name": "malloc0", 00:19:38.000 "num_blocks": 8192, 00:19:38.000 "block_size": 4096, 00:19:38.000 "physical_block_size": 4096, 00:19:38.000 "uuid": "6db0109a-37d6-4782-a134-d41319e29a2d", 00:19:38.000 "optimal_io_boundary": 0, 00:19:38.000 "md_size": 0, 00:19:38.000 "dif_type": 0, 00:19:38.000 "dif_is_head_of_md": false, 00:19:38.000 "dif_pi_format": 0 00:19:38.000 } 00:19:38.000 }, 00:19:38.000 { 00:19:38.000 "method": "bdev_wait_for_examine" 00:19:38.000 } 00:19:38.000 ] 00:19:38.000 }, 00:19:38.000 { 00:19:38.000 "subsystem": "nbd", 00:19:38.000 "config": [] 00:19:38.000 }, 00:19:38.000 { 00:19:38.000 "subsystem": "scheduler", 00:19:38.000 "config": [ 00:19:38.000 { 00:19:38.000 "method": "framework_set_scheduler", 00:19:38.000 "params": { 00:19:38.000 "name": "static" 00:19:38.000 } 00:19:38.000 } 00:19:38.000 ] 00:19:38.000 }, 00:19:38.000 { 00:19:38.000 "subsystem": "nvmf", 00:19:38.000 "config": [ 00:19:38.000 { 00:19:38.000 "method": "nvmf_set_config", 00:19:38.000 "params": { 00:19:38.000 "discovery_filter": "match_any", 00:19:38.000 "admin_cmd_passthru": { 00:19:38.000 "identify_ctrlr": false 00:19:38.000 }, 00:19:38.000 "dhchap_digests": [ 00:19:38.000 "sha256", 00:19:38.000 "sha384", 00:19:38.000 "sha512" 00:19:38.000 ], 00:19:38.000 "dhchap_dhgroups": [ 00:19:38.000 "null", 00:19:38.000 "ffdhe2048", 00:19:38.000 "ffdhe3072", 00:19:38.000 "ffdhe4096", 00:19:38.000 "ffdhe6144", 00:19:38.000 "ffdhe8192" 00:19:38.000 ] 00:19:38.000 } 00:19:38.000 }, 00:19:38.000 { 00:19:38.000 "method": "nvmf_set_max_subsystems", 00:19:38.000 "params": { 00:19:38.000 "max_subsystems": 1024 00:19:38.000 } 00:19:38.000 }, 00:19:38.000 { 00:19:38.000 "method": "nvmf_set_crdt", 00:19:38.000 "params": { 00:19:38.000 "crdt1": 0, 00:19:38.000 "crdt2": 0, 00:19:38.000 "crdt3": 0 00:19:38.000 } 00:19:38.000 }, 00:19:38.000 { 00:19:38.000 "method": "nvmf_create_transport", 00:19:38.000 "params": { 00:19:38.000 "trtype": "TCP", 00:19:38.000 "max_queue_depth": 128, 00:19:38.000 "max_io_qpairs_per_ctrlr": 127, 00:19:38.000 "in_capsule_data_size": 4096, 00:19:38.000 "max_io_size": 131072, 00:19:38.000 "io_unit_size": 131072, 00:19:38.000 "max_aq_depth": 128, 00:19:38.000 "num_shared_buffers": 511, 00:19:38.000 "buf_cache_size": 4294967295, 00:19:38.000 "dif_insert_or_strip": false, 00:19:38.000 "zcopy": false, 00:19:38.000 "c2h_success": false, 00:19:38.000 "sock_priority": 0, 00:19:38.000 "abort_timeout_sec": 1, 00:19:38.000 "ack_timeout": 0, 00:19:38.000 "data_wr_pool_size": 0 00:19:38.000 } 00:19:38.000 }, 00:19:38.000 { 00:19:38.000 "method": "nvmf_create_subsystem", 00:19:38.000 "params": { 00:19:38.000 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.000 "allow_any_host": false, 00:19:38.000 "serial_number": "SPDK00000000000001", 00:19:38.000 "model_number": "SPDK bdev Controller", 00:19:38.000 "max_namespaces": 10, 00:19:38.000 "min_cntlid": 1, 00:19:38.000 "max_cntlid": 65519, 00:19:38.000 
"ana_reporting": false 00:19:38.000 } 00:19:38.000 }, 00:19:38.000 { 00:19:38.000 "method": "nvmf_subsystem_add_host", 00:19:38.000 "params": { 00:19:38.000 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.000 "host": "nqn.2016-06.io.spdk:host1", 00:19:38.000 "psk": "key0" 00:19:38.000 } 00:19:38.000 }, 00:19:38.000 { 00:19:38.000 "method": "nvmf_subsystem_add_ns", 00:19:38.000 "params": { 00:19:38.000 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.000 "namespace": { 00:19:38.000 "nsid": 1, 00:19:38.000 "bdev_name": "malloc0", 00:19:38.000 "nguid": "6DB0109A37D64782A134D41319E29A2D", 00:19:38.000 "uuid": "6db0109a-37d6-4782-a134-d41319e29a2d", 00:19:38.000 "no_auto_visible": false 00:19:38.000 } 00:19:38.000 } 00:19:38.000 }, 00:19:38.000 { 00:19:38.000 "method": "nvmf_subsystem_add_listener", 00:19:38.000 "params": { 00:19:38.000 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.000 "listen_address": { 00:19:38.000 "trtype": "TCP", 00:19:38.000 "adrfam": "IPv4", 00:19:38.000 "traddr": "10.0.0.2", 00:19:38.000 "trsvcid": "4420" 00:19:38.000 }, 00:19:38.000 "secure_channel": true 00:19:38.000 } 00:19:38.000 } 00:19:38.000 ] 00:19:38.000 } 00:19:38.000 ] 00:19:38.000 }' 00:19:38.000 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:38.260 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:38.260 "subsystems": [ 00:19:38.260 { 00:19:38.260 "subsystem": "keyring", 00:19:38.260 "config": [ 00:19:38.260 { 00:19:38.260 "method": "keyring_file_add_key", 00:19:38.260 "params": { 00:19:38.260 "name": "key0", 00:19:38.260 "path": "/tmp/tmp.oAhvjHIcuA" 00:19:38.260 } 00:19:38.260 } 00:19:38.260 ] 00:19:38.260 }, 00:19:38.260 { 00:19:38.260 "subsystem": "iobuf", 00:19:38.260 "config": [ 00:19:38.260 { 00:19:38.260 "method": "iobuf_set_options", 00:19:38.260 "params": { 00:19:38.260 "small_pool_count": 8192, 00:19:38.260 "large_pool_count": 1024, 00:19:38.260 "small_bufsize": 8192, 00:19:38.260 "large_bufsize": 135168 00:19:38.260 } 00:19:38.260 } 00:19:38.260 ] 00:19:38.260 }, 00:19:38.260 { 00:19:38.260 "subsystem": "sock", 00:19:38.260 "config": [ 00:19:38.260 { 00:19:38.260 "method": "sock_set_default_impl", 00:19:38.260 "params": { 00:19:38.260 "impl_name": "posix" 00:19:38.260 } 00:19:38.260 }, 00:19:38.260 { 00:19:38.260 "method": "sock_impl_set_options", 00:19:38.260 "params": { 00:19:38.260 "impl_name": "ssl", 00:19:38.260 "recv_buf_size": 4096, 00:19:38.260 "send_buf_size": 4096, 00:19:38.260 "enable_recv_pipe": true, 00:19:38.260 "enable_quickack": false, 00:19:38.260 "enable_placement_id": 0, 00:19:38.260 "enable_zerocopy_send_server": true, 00:19:38.260 "enable_zerocopy_send_client": false, 00:19:38.260 "zerocopy_threshold": 0, 00:19:38.260 "tls_version": 0, 00:19:38.260 "enable_ktls": false 00:19:38.260 } 00:19:38.260 }, 00:19:38.260 { 00:19:38.260 "method": "sock_impl_set_options", 00:19:38.260 "params": { 00:19:38.260 "impl_name": "posix", 00:19:38.260 "recv_buf_size": 2097152, 00:19:38.260 "send_buf_size": 2097152, 00:19:38.260 "enable_recv_pipe": true, 00:19:38.260 "enable_quickack": false, 00:19:38.260 "enable_placement_id": 0, 00:19:38.260 "enable_zerocopy_send_server": true, 00:19:38.260 "enable_zerocopy_send_client": false, 00:19:38.260 "zerocopy_threshold": 0, 00:19:38.260 "tls_version": 0, 00:19:38.260 "enable_ktls": false 00:19:38.260 } 00:19:38.260 } 00:19:38.260 ] 00:19:38.260 }, 00:19:38.260 { 00:19:38.260 
"subsystem": "vmd", 00:19:38.260 "config": [] 00:19:38.260 }, 00:19:38.260 { 00:19:38.260 "subsystem": "accel", 00:19:38.260 "config": [ 00:19:38.260 { 00:19:38.260 "method": "accel_set_options", 00:19:38.260 "params": { 00:19:38.260 "small_cache_size": 128, 00:19:38.260 "large_cache_size": 16, 00:19:38.260 "task_count": 2048, 00:19:38.260 "sequence_count": 2048, 00:19:38.260 "buf_count": 2048 00:19:38.260 } 00:19:38.260 } 00:19:38.260 ] 00:19:38.260 }, 00:19:38.260 { 00:19:38.260 "subsystem": "bdev", 00:19:38.260 "config": [ 00:19:38.260 { 00:19:38.260 "method": "bdev_set_options", 00:19:38.260 "params": { 00:19:38.260 "bdev_io_pool_size": 65535, 00:19:38.260 "bdev_io_cache_size": 256, 00:19:38.260 "bdev_auto_examine": true, 00:19:38.260 "iobuf_small_cache_size": 128, 00:19:38.260 "iobuf_large_cache_size": 16 00:19:38.260 } 00:19:38.260 }, 00:19:38.260 { 00:19:38.260 "method": "bdev_raid_set_options", 00:19:38.260 "params": { 00:19:38.260 "process_window_size_kb": 1024, 00:19:38.260 "process_max_bandwidth_mb_sec": 0 00:19:38.260 } 00:19:38.260 }, 00:19:38.260 { 00:19:38.260 "method": "bdev_iscsi_set_options", 00:19:38.260 "params": { 00:19:38.260 "timeout_sec": 30 00:19:38.260 } 00:19:38.260 }, 00:19:38.260 { 00:19:38.260 "method": "bdev_nvme_set_options", 00:19:38.260 "params": { 00:19:38.260 "action_on_timeout": "none", 00:19:38.260 "timeout_us": 0, 00:19:38.260 "timeout_admin_us": 0, 00:19:38.260 "keep_alive_timeout_ms": 10000, 00:19:38.260 "arbitration_burst": 0, 00:19:38.260 "low_priority_weight": 0, 00:19:38.260 "medium_priority_weight": 0, 00:19:38.260 "high_priority_weight": 0, 00:19:38.260 "nvme_adminq_poll_period_us": 10000, 00:19:38.260 "nvme_ioq_poll_period_us": 0, 00:19:38.260 "io_queue_requests": 512, 00:19:38.260 "delay_cmd_submit": true, 00:19:38.260 "transport_retry_count": 4, 00:19:38.260 "bdev_retry_count": 3, 00:19:38.260 "transport_ack_timeout": 0, 00:19:38.260 "ctrlr_loss_timeout_sec": 0, 00:19:38.260 "reconnect_delay_sec": 0, 00:19:38.260 "fast_io_fail_timeout_sec": 0, 00:19:38.260 "disable_auto_failback": false, 00:19:38.260 "generate_uuids": false, 00:19:38.260 "transport_tos": 0, 00:19:38.260 "nvme_error_stat": false, 00:19:38.260 "rdma_srq_size": 0, 00:19:38.260 "io_path_stat": false, 00:19:38.260 "allow_accel_sequence": false, 00:19:38.260 "rdma_max_cq_size": 0, 00:19:38.260 "rdma_cm_event_timeout_ms": 0, 00:19:38.260 "dhchap_digests": [ 00:19:38.260 "sha256", 00:19:38.260 "sha384", 00:19:38.260 "sha512" 00:19:38.260 ], 00:19:38.260 "dhchap_dhgroups": [ 00:19:38.260 "null", 00:19:38.260 "ffdhe2048", 00:19:38.260 "ffdhe3072", 00:19:38.260 "ffdhe4096", 00:19:38.260 "ffdhe6144", 00:19:38.260 "ffdhe8192" 00:19:38.260 ] 00:19:38.260 } 00:19:38.260 }, 00:19:38.260 { 00:19:38.260 "method": "bdev_nvme_attach_controller", 00:19:38.260 "params": { 00:19:38.260 "name": "TLSTEST", 00:19:38.260 "trtype": "TCP", 00:19:38.260 "adrfam": "IPv4", 00:19:38.260 "traddr": "10.0.0.2", 00:19:38.261 "trsvcid": "4420", 00:19:38.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.261 "prchk_reftag": false, 00:19:38.261 "prchk_guard": false, 00:19:38.261 "ctrlr_loss_timeout_sec": 0, 00:19:38.261 "reconnect_delay_sec": 0, 00:19:38.261 "fast_io_fail_timeout_sec": 0, 00:19:38.261 "psk": "key0", 00:19:38.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:38.261 "hdgst": false, 00:19:38.261 "ddgst": false, 00:19:38.261 "multipath": "multipath" 00:19:38.261 } 00:19:38.261 }, 00:19:38.261 { 00:19:38.261 "method": "bdev_nvme_set_hotplug", 00:19:38.261 "params": { 00:19:38.261 "period_us": 
100000, 00:19:38.261 "enable": false 00:19:38.261 } 00:19:38.261 }, 00:19:38.261 { 00:19:38.261 "method": "bdev_wait_for_examine" 00:19:38.261 } 00:19:38.261 ] 00:19:38.261 }, 00:19:38.261 { 00:19:38.261 "subsystem": "nbd", 00:19:38.261 "config": [] 00:19:38.261 } 00:19:38.261 ] 00:19:38.261 }' 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 444794 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 444794 ']' 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 444794 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 444794 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 444794' 00:19:38.261 killing process with pid 444794 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 444794 00:19:38.261 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.261 00:19:38.261 Latency(us) 00:19:38.261 [2024-10-08T16:27:31.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.261 [2024-10-08T16:27:31.584Z] =================================================================================================================== 00:19:38.261 [2024-10-08T16:27:31.584Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 444794 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 444467 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 444467 ']' 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 444467 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.261 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 444467 00:19:38.521 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:38.521 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:38.521 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 444467' 00:19:38.521 killing process with pid 444467 00:19:38.521 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 444467 00:19:38.521 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 444467 00:19:38.521 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:38.521 18:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:38.521 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.521 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:38.521 "subsystems": [ 00:19:38.521 { 00:19:38.521 "subsystem": "keyring", 00:19:38.521 "config": [ 00:19:38.521 { 00:19:38.521 "method": "keyring_file_add_key", 00:19:38.521 "params": { 00:19:38.521 "name": "key0", 00:19:38.521 "path": "/tmp/tmp.oAhvjHIcuA" 00:19:38.521 } 00:19:38.521 } 00:19:38.521 ] 00:19:38.521 }, 00:19:38.521 { 00:19:38.521 "subsystem": "iobuf", 00:19:38.521 "config": [ 00:19:38.521 { 00:19:38.521 "method": "iobuf_set_options", 00:19:38.521 "params": { 00:19:38.521 "small_pool_count": 8192, 00:19:38.521 "large_pool_count": 1024, 00:19:38.521 "small_bufsize": 8192, 00:19:38.521 "large_bufsize": 135168 00:19:38.521 } 00:19:38.521 } 00:19:38.521 ] 00:19:38.521 }, 00:19:38.521 { 00:19:38.521 "subsystem": "sock", 00:19:38.521 "config": [ 00:19:38.521 { 00:19:38.521 "method": "sock_set_default_impl", 00:19:38.521 "params": { 00:19:38.521 "impl_name": "posix" 00:19:38.521 } 00:19:38.521 }, 00:19:38.521 { 00:19:38.521 "method": "sock_impl_set_options", 00:19:38.521 "params": { 00:19:38.521 "impl_name": "ssl", 00:19:38.521 "recv_buf_size": 4096, 00:19:38.521 "send_buf_size": 4096, 00:19:38.521 "enable_recv_pipe": true, 00:19:38.521 "enable_quickack": false, 00:19:38.521 "enable_placement_id": 0, 00:19:38.521 "enable_zerocopy_send_server": true, 00:19:38.521 "enable_zerocopy_send_client": false, 00:19:38.521 "zerocopy_threshold": 0, 00:19:38.521 "tls_version": 0, 00:19:38.521 "enable_ktls": false 00:19:38.521 } 00:19:38.521 }, 00:19:38.521 { 00:19:38.521 "method": "sock_impl_set_options", 00:19:38.521 "params": { 00:19:38.521 "impl_name": "posix", 00:19:38.521 "recv_buf_size": 2097152, 00:19:38.521 "send_buf_size": 2097152, 00:19:38.521 "enable_recv_pipe": true, 00:19:38.521 "enable_quickack": false, 00:19:38.521 "enable_placement_id": 0, 00:19:38.521 "enable_zerocopy_send_server": true, 00:19:38.521 "enable_zerocopy_send_client": false, 00:19:38.521 "zerocopy_threshold": 0, 00:19:38.521 "tls_version": 0, 00:19:38.521 "enable_ktls": false 00:19:38.521 } 00:19:38.521 } 00:19:38.521 ] 00:19:38.521 }, 00:19:38.521 { 00:19:38.521 "subsystem": "vmd", 00:19:38.521 "config": [] 00:19:38.521 }, 00:19:38.521 { 00:19:38.521 "subsystem": "accel", 00:19:38.521 "config": [ 00:19:38.521 { 00:19:38.521 "method": "accel_set_options", 00:19:38.521 "params": { 00:19:38.521 "small_cache_size": 128, 00:19:38.521 "large_cache_size": 16, 00:19:38.521 "task_count": 2048, 00:19:38.521 "sequence_count": 2048, 00:19:38.521 "buf_count": 2048 00:19:38.521 } 00:19:38.521 } 00:19:38.521 ] 00:19:38.521 }, 00:19:38.521 { 00:19:38.521 "subsystem": "bdev", 00:19:38.521 "config": [ 00:19:38.521 { 00:19:38.521 "method": "bdev_set_options", 00:19:38.521 "params": { 00:19:38.521 "bdev_io_pool_size": 65535, 00:19:38.521 "bdev_io_cache_size": 256, 00:19:38.521 "bdev_auto_examine": true, 00:19:38.521 "iobuf_small_cache_size": 128, 00:19:38.521 "iobuf_large_cache_size": 16 00:19:38.521 } 00:19:38.521 }, 00:19:38.521 { 00:19:38.521 "method": "bdev_raid_set_options", 00:19:38.521 "params": { 00:19:38.521 "process_window_size_kb": 1024, 00:19:38.521 "process_max_bandwidth_mb_sec": 0 00:19:38.521 } 00:19:38.521 }, 00:19:38.521 { 00:19:38.521 "method": "bdev_iscsi_set_options", 00:19:38.521 "params": { 00:19:38.521 "timeout_sec": 
30 00:19:38.521 } 00:19:38.521 }, 00:19:38.521 { 00:19:38.521 "method": "bdev_nvme_set_options", 00:19:38.521 "params": { 00:19:38.521 "action_on_timeout": "none", 00:19:38.521 "timeout_us": 0, 00:19:38.521 "timeout_admin_us": 0, 00:19:38.521 "keep_alive_timeout_ms": 10000, 00:19:38.521 "arbitration_burst": 0, 00:19:38.521 "low_priority_weight": 0, 00:19:38.521 "medium_priority_weight": 0, 00:19:38.521 "high_priority_weight": 0, 00:19:38.521 "nvme_adminq_poll_period_us": 10000, 00:19:38.521 "nvme_ioq_poll_period_us": 0, 00:19:38.521 "io_queue_requests": 0, 00:19:38.521 "delay_cmd_submit": true, 00:19:38.521 "transport_retry_count": 4, 00:19:38.521 "bdev_retry_count": 3, 00:19:38.521 "transport_ack_timeout": 0, 00:19:38.521 "ctrlr_loss_timeout_sec": 0, 00:19:38.521 "reconnect_delay_sec": 0, 00:19:38.521 "fast_io_fail_timeout_sec": 0, 00:19:38.521 "disable_auto_failback": false, 00:19:38.521 "generate_uuids": false, 00:19:38.522 "transport_tos": 0, 00:19:38.522 "nvme_error_stat": false, 00:19:38.522 "rdma_srq_size": 0, 00:19:38.522 "io_path_stat": false, 00:19:38.522 "allow_accel_sequence": false, 00:19:38.522 "rdma_max_cq_size": 0, 00:19:38.522 "rdma_cm_event_timeout_ms": 0, 00:19:38.522 "dhchap_digests": [ 00:19:38.522 "sha256", 00:19:38.522 "sha384", 00:19:38.522 "sha512" 00:19:38.522 ], 00:19:38.522 "dhchap_dhgroups": [ 00:19:38.522 "null", 00:19:38.522 "ffdhe2048", 00:19:38.522 "ffdhe3072", 00:19:38.522 "ffdhe4096", 00:19:38.522 "ffdhe6144", 00:19:38.522 "ffdhe8192" 00:19:38.522 ] 00:19:38.522 } 00:19:38.522 }, 00:19:38.522 { 00:19:38.522 "method": "bdev_nvme_set_hotplug", 00:19:38.522 "params": { 00:19:38.522 "period_us": 100000, 00:19:38.522 "enable": false 00:19:38.522 } 00:19:38.522 }, 00:19:38.522 { 00:19:38.522 "method": "bdev_malloc_create", 00:19:38.522 "params": { 00:19:38.522 "name": "malloc0", 00:19:38.522 "num_blocks": 8192, 00:19:38.522 "block_size": 4096, 00:19:38.522 "physical_block_size": 4096, 00:19:38.522 "uuid": "6db0109a-37d6-4782-a134-d41319e29a2d", 00:19:38.522 "optimal_io_boundary": 0, 00:19:38.522 "md_size": 0, 00:19:38.522 "dif_type": 0, 00:19:38.522 "dif_is_head_of_md": false, 00:19:38.522 "dif_pi_format": 0 00:19:38.522 } 00:19:38.522 }, 00:19:38.522 { 00:19:38.522 "method": "bdev_wait_for_examine" 00:19:38.522 } 00:19:38.522 ] 00:19:38.522 }, 00:19:38.522 { 00:19:38.522 "subsystem": "nbd", 00:19:38.522 "config": [] 00:19:38.522 }, 00:19:38.522 { 00:19:38.522 "subsystem": "scheduler", 00:19:38.522 "config": [ 00:19:38.522 { 00:19:38.522 "method": "framework_set_scheduler", 00:19:38.522 "params": { 00:19:38.522 "name": "static" 00:19:38.522 } 00:19:38.522 } 00:19:38.522 ] 00:19:38.522 }, 00:19:38.522 { 00:19:38.522 "subsystem": "nvmf", 00:19:38.522 "config": [ 00:19:38.522 { 00:19:38.522 "method": "nvmf_set_config", 00:19:38.522 "params": { 00:19:38.522 "discovery_filter": "match_any", 00:19:38.522 "admin_cmd_passthru": { 00:19:38.522 "identify_ctrlr": false 00:19:38.522 }, 00:19:38.522 "dhchap_digests": [ 00:19:38.522 "sha256", 00:19:38.522 "sha384", 00:19:38.522 "sha512" 00:19:38.522 ], 00:19:38.522 "dhchap_dhgroups": [ 00:19:38.522 "null", 00:19:38.522 "ffdhe2048", 00:19:38.522 "ffdhe3072", 00:19:38.522 "ffdhe4096", 00:19:38.522 "ffdhe6144", 00:19:38.522 "ffdhe8192" 00:19:38.522 ] 00:19:38.522 } 00:19:38.522 }, 00:19:38.522 { 00:19:38.522 "method": "nvmf_set_max_subsystems", 00:19:38.522 "params": { 00:19:38.522 "max_subsystems": 1024 00:19:38.522 } 00:19:38.522 }, 00:19:38.522 { 00:19:38.522 "method": "nvmf_set_crdt", 00:19:38.522 "params": { 00:19:38.522 
"crdt1": 0, 00:19:38.522 "crdt2": 0, 00:19:38.522 "crdt3": 0 00:19:38.522 } 00:19:38.522 }, 00:19:38.522 { 00:19:38.522 "method": "nvmf_create_transport", 00:19:38.522 "params": { 00:19:38.522 "trtype": "TCP", 00:19:38.522 "max_queue_depth": 128, 00:19:38.522 "max_io_qpairs_per_ctrlr": 127, 00:19:38.522 "in_capsule_data_size": 4096, 00:19:38.522 "max_io_size": 131072, 00:19:38.522 "io_unit_size": 131072, 00:19:38.522 "max_aq_depth": 128, 00:19:38.522 "num_shared_buffers": 511, 00:19:38.522 "buf_cache_size": 4294967295, 00:19:38.522 "dif_insert_or_strip": false, 00:19:38.522 "zcopy": false, 00:19:38.522 "c2h_success": false, 00:19:38.522 "sock_priority": 0, 00:19:38.522 "abort_timeout_sec": 1, 00:19:38.522 "ack_timeout": 0, 00:19:38.522 "data_wr_pool_size": 0 00:19:38.522 } 00:19:38.522 }, 00:19:38.522 { 00:19:38.522 "method": "nvmf_create_subsystem", 00:19:38.522 "params": { 00:19:38.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.522 "allow_any_host": false, 00:19:38.522 "serial_number": "SPDK00000000000001", 00:19:38.522 "model_number": "SPDK bdev Controller", 00:19:38.522 "max_namespaces": 10, 00:19:38.522 "min_cntlid": 1, 00:19:38.522 "max_cntlid": 65519, 00:19:38.522 "ana_reporting": false 00:19:38.522 } 00:19:38.522 }, 00:19:38.522 { 00:19:38.522 "method": "nvmf_subsystem_add_host", 00:19:38.522 "params": { 00:19:38.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.522 "host": "nqn.2016-06.io.spdk:host1", 00:19:38.522 "psk": "key0" 00:19:38.522 } 00:19:38.522 }, 00:19:38.522 { 00:19:38.522 "method": "nvmf_subsystem_add_ns", 00:19:38.522 "params": { 00:19:38.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.522 "namespace": { 00:19:38.522 "nsid": 1, 00:19:38.522 "bdev_name": "malloc0", 00:19:38.522 "nguid": "6DB0109A37D64782A134D41319E29A2D", 00:19:38.522 "uuid": "6db0109a-37d6-4782-a134-d41319e29a2d", 00:19:38.522 "no_auto_visible": false 00:19:38.522 } 00:19:38.522 } 00:19:38.522 }, 00:19:38.522 { 00:19:38.522 "method": "nvmf_subsystem_add_listener", 00:19:38.522 "params": { 00:19:38.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.522 "listen_address": { 00:19:38.522 "trtype": "TCP", 00:19:38.522 "adrfam": "IPv4", 00:19:38.522 "traddr": "10.0.0.2", 00:19:38.522 "trsvcid": "4420" 00:19:38.522 }, 00:19:38.522 "secure_channel": true 00:19:38.522 } 00:19:38.522 } 00:19:38.522 ] 00:19:38.522 } 00:19:38.522 ] 00:19:38.522 }' 00:19:38.522 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.522 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=445211 00:19:38.522 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:38.522 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 445211 00:19:38.522 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 445211 ']' 00:19:38.522 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.522 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.522 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:38.522 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.522 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.782 [2024-10-08 18:27:31.866337] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:19:38.782 [2024-10-08 18:27:31.866384] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.782 [2024-10-08 18:27:31.926752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.782 [2024-10-08 18:27:32.000155] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.782 [2024-10-08 18:27:32.000195] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.782 [2024-10-08 18:27:32.000202] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.782 [2024-10-08 18:27:32.000208] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.782 [2024-10-08 18:27:32.000213] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.782 [2024-10-08 18:27:32.000838] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.040 [2024-10-08 18:27:32.226355] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.041 [2024-10-08 18:27:32.258212] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:39.041 [2024-10-08 18:27:32.258435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.608 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.608 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:39.608 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:39.608 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:39.608 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.608 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.608 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=445243 00:19:39.608 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 445243 /var/tmp/bdevperf.sock 00:19:39.608 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 445243 ']' 00:19:39.608 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.608 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:39.608 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:39.608 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:19:39.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.608 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:39.608 "subsystems": [ 00:19:39.608 { 00:19:39.608 "subsystem": "keyring", 00:19:39.608 "config": [ 00:19:39.608 { 00:19:39.608 "method": "keyring_file_add_key", 00:19:39.609 "params": { 00:19:39.609 "name": "key0", 00:19:39.609 "path": "/tmp/tmp.oAhvjHIcuA" 00:19:39.609 } 00:19:39.609 } 00:19:39.609 ] 00:19:39.609 }, 00:19:39.609 { 00:19:39.609 "subsystem": "iobuf", 00:19:39.609 "config": [ 00:19:39.609 { 00:19:39.609 "method": "iobuf_set_options", 00:19:39.609 "params": { 00:19:39.609 "small_pool_count": 8192, 00:19:39.609 "large_pool_count": 1024, 00:19:39.609 "small_bufsize": 8192, 00:19:39.609 "large_bufsize": 135168 00:19:39.609 } 00:19:39.609 } 00:19:39.609 ] 00:19:39.609 }, 00:19:39.609 { 00:19:39.609 "subsystem": "sock", 00:19:39.609 "config": [ 00:19:39.609 { 00:19:39.609 "method": "sock_set_default_impl", 00:19:39.609 "params": { 00:19:39.609 "impl_name": "posix" 00:19:39.609 } 00:19:39.609 }, 00:19:39.609 { 00:19:39.609 "method": "sock_impl_set_options", 00:19:39.609 "params": { 00:19:39.609 "impl_name": "ssl", 00:19:39.609 "recv_buf_size": 4096, 00:19:39.609 "send_buf_size": 4096, 00:19:39.609 "enable_recv_pipe": true, 00:19:39.609 "enable_quickack": false, 00:19:39.609 "enable_placement_id": 0, 00:19:39.609 "enable_zerocopy_send_server": true, 00:19:39.609 "enable_zerocopy_send_client": false, 00:19:39.609 "zerocopy_threshold": 0, 00:19:39.609 "tls_version": 0, 00:19:39.609 "enable_ktls": false 00:19:39.609 } 00:19:39.609 }, 00:19:39.609 { 00:19:39.609 "method": "sock_impl_set_options", 00:19:39.609 "params": { 00:19:39.609 "impl_name": "posix", 00:19:39.609 "recv_buf_size": 2097152, 00:19:39.609 "send_buf_size": 2097152, 00:19:39.609 "enable_recv_pipe": true, 00:19:39.609 "enable_quickack": false, 00:19:39.609 "enable_placement_id": 0, 00:19:39.609 "enable_zerocopy_send_server": true, 00:19:39.609 "enable_zerocopy_send_client": false, 00:19:39.609 "zerocopy_threshold": 0, 00:19:39.609 "tls_version": 0, 00:19:39.609 "enable_ktls": false 00:19:39.609 } 00:19:39.609 } 00:19:39.609 ] 00:19:39.609 }, 00:19:39.609 { 00:19:39.609 "subsystem": "vmd", 00:19:39.609 "config": [] 00:19:39.609 }, 00:19:39.609 { 00:19:39.609 "subsystem": "accel", 00:19:39.609 "config": [ 00:19:39.609 { 00:19:39.609 "method": "accel_set_options", 00:19:39.609 "params": { 00:19:39.609 "small_cache_size": 128, 00:19:39.609 "large_cache_size": 16, 00:19:39.609 "task_count": 2048, 00:19:39.609 "sequence_count": 2048, 00:19:39.609 "buf_count": 2048 00:19:39.609 } 00:19:39.609 } 00:19:39.609 ] 00:19:39.609 }, 00:19:39.609 { 00:19:39.609 "subsystem": "bdev", 00:19:39.609 "config": [ 00:19:39.609 { 00:19:39.609 "method": "bdev_set_options", 00:19:39.609 "params": { 00:19:39.609 "bdev_io_pool_size": 65535, 00:19:39.609 "bdev_io_cache_size": 256, 00:19:39.609 "bdev_auto_examine": true, 00:19:39.609 "iobuf_small_cache_size": 128, 00:19:39.609 "iobuf_large_cache_size": 16 00:19:39.609 } 00:19:39.609 }, 00:19:39.609 { 00:19:39.609 "method": "bdev_raid_set_options", 00:19:39.609 "params": { 00:19:39.609 "process_window_size_kb": 1024, 00:19:39.609 "process_max_bandwidth_mb_sec": 0 00:19:39.609 } 00:19:39.609 }, 00:19:39.609 { 00:19:39.609 "method": "bdev_iscsi_set_options", 00:19:39.609 "params": { 00:19:39.609 "timeout_sec": 30 00:19:39.609 } 00:19:39.609 }, 00:19:39.609 { 
00:19:39.609 "method": "bdev_nvme_set_options", 00:19:39.609 "params": { 00:19:39.609 "action_on_timeout": "none", 00:19:39.609 "timeout_us": 0, 00:19:39.609 "timeout_admin_us": 0, 00:19:39.609 "keep_alive_timeout_ms": 10000, 00:19:39.609 "arbitration_burst": 0, 00:19:39.609 "low_priority_weight": 0, 00:19:39.609 "medium_priority_weight": 0, 00:19:39.609 "high_priority_weight": 0, 00:19:39.609 "nvme_adminq_poll_period_us": 10000, 00:19:39.609 "nvme_ioq_poll_period_us": 0, 00:19:39.609 "io_queue_requests": 512, 00:19:39.609 "delay_cmd_submit": true, 00:19:39.609 "transport_retry_count": 4, 00:19:39.609 "bdev_retry_count": 3, 00:19:39.609 "transport_ack_timeout": 0, 00:19:39.609 "ctrlr_loss_timeout_sec": 0, 00:19:39.609 "reconnect_delay_sec": 0, 00:19:39.609 "fast_io_fail_timeout_sec": 0, 00:19:39.609 "disable_auto_failback": false, 00:19:39.609 "generate_uuids": false, 00:19:39.609 "transport_tos": 0, 00:19:39.609 "nvme_error_stat": false, 00:19:39.609 "rdma_srq_size": 0, 00:19:39.609 "io_path_stat": false, 00:19:39.609 "allow_accel_sequence": false, 00:19:39.609 "rdma_max_cq_size": 0, 00:19:39.609 "rdma_cm_event_timeout_ms": 0, 00:19:39.609 "dhchap_digests": [ 00:19:39.609 "sha256", 00:19:39.609 "sha384", 00:19:39.609 "sha512" 00:19:39.609 ], 00:19:39.609 "dhchap_dhgroups": [ 00:19:39.609 "null", 00:19:39.609 "ffdhe2048", 00:19:39.609 "ffdhe3072", 00:19:39.609 "ffdhe4096", 00:19:39.609 "ffdhe6144", 00:19:39.609 "ffdhe8192" 00:19:39.609 ] 00:19:39.609 } 00:19:39.609 }, 00:19:39.609 { 00:19:39.609 "method": "bdev_nvme_attach_controller", 00:19:39.609 "params": { 00:19:39.609 "name": "TLSTEST", 00:19:39.609 "trtype": "TCP", 00:19:39.609 "adrfam": "IPv4", 00:19:39.609 "traddr": "10.0.0.2", 00:19:39.609 "trsvcid": "4420", 00:19:39.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.609 "prchk_reftag": false, 00:19:39.609 "prchk_guard": false, 00:19:39.609 "ctrlr_loss_timeout_sec": 0, 00:19:39.609 "reconnect_delay_sec": 0, 00:19:39.609 "fast_io_fail_timeout_sec": 0, 00:19:39.609 "psk": "key0", 00:19:39.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.609 "hdgst": false, 00:19:39.609 "ddgst": false, 00:19:39.609 "multipath": "multipath" 00:19:39.609 } 00:19:39.609 }, 00:19:39.609 { 00:19:39.609 "method": "bdev_nvme_set_hotplug", 00:19:39.609 "params": { 00:19:39.609 "period_us": 100000, 00:19:39.609 "enable": false 00:19:39.609 } 00:19:39.609 }, 00:19:39.609 { 00:19:39.609 "method": "bdev_wait_for_examine" 00:19:39.609 } 00:19:39.609 ] 00:19:39.609 }, 00:19:39.609 { 00:19:39.609 "subsystem": "nbd", 00:19:39.609 "config": [] 00:19:39.609 } 00:19:39.609 ] 00:19:39.609 }' 00:19:39.609 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:39.609 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.609 [2024-10-08 18:27:32.780324] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:19:39.609 [2024-10-08 18:27:32.780372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445243 ] 00:19:39.609 [2024-10-08 18:27:32.850477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.609 [2024-10-08 18:27:32.921589] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.868 [2024-10-08 18:27:33.073625] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.437 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.437 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:40.437 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:40.437 Running I/O for 10 seconds... 00:19:42.752 5334.00 IOPS, 20.84 MiB/s [2024-10-08T16:27:37.012Z] 5507.50 IOPS, 21.51 MiB/s [2024-10-08T16:27:38.002Z] 5552.00 IOPS, 21.69 MiB/s [2024-10-08T16:27:38.979Z] 5547.00 IOPS, 21.67 MiB/s [2024-10-08T16:27:39.915Z] 5563.00 IOPS, 21.73 MiB/s [2024-10-08T16:27:40.852Z] 5582.50 IOPS, 21.81 MiB/s [2024-10-08T16:27:41.789Z] 5568.29 IOPS, 21.75 MiB/s [2024-10-08T16:27:43.166Z] 5585.00 IOPS, 21.82 MiB/s [2024-10-08T16:27:44.103Z] 5591.89 IOPS, 21.84 MiB/s [2024-10-08T16:27:44.103Z] 5585.40 IOPS, 21.82 MiB/s 00:19:50.780 Latency(us) 00:19:50.780 [2024-10-08T16:27:44.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.780 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:50.780 Verification LBA range: start 0x0 length 0x2000 00:19:50.780 TLSTESTn1 : 10.01 5590.98 21.84 0.00 0.00 22860.87 4774.77 30208.98 00:19:50.780 [2024-10-08T16:27:44.103Z] =================================================================================================================== 00:19:50.780 [2024-10-08T16:27:44.103Z] Total : 5590.98 21.84 0.00 0.00 22860.87 4774.77 30208.98 00:19:50.780 { 00:19:50.780 "results": [ 00:19:50.780 { 00:19:50.780 "job": "TLSTESTn1", 00:19:50.780 "core_mask": "0x4", 00:19:50.781 "workload": "verify", 00:19:50.781 "status": "finished", 00:19:50.781 "verify_range": { 00:19:50.781 "start": 0, 00:19:50.781 "length": 8192 00:19:50.781 }, 00:19:50.781 "queue_depth": 128, 00:19:50.781 "io_size": 4096, 00:19:50.781 "runtime": 10.012735, 00:19:50.781 "iops": 5590.979887113761, 00:19:50.781 "mibps": 21.83976518403813, 00:19:50.781 "io_failed": 0, 00:19:50.781 "io_timeout": 0, 00:19:50.781 "avg_latency_us": 22860.871769656544, 00:19:50.781 "min_latency_us": 4774.765714285714, 00:19:50.781 "max_latency_us": 30208.975238095238 00:19:50.781 } 00:19:50.781 ], 00:19:50.781 "core_count": 1 00:19:50.781 } 00:19:50.781 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:50.781 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 445243 00:19:50.781 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 445243 ']' 00:19:50.781 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 445243 00:19:50.781 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:19:50.781 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:50.781 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 445243 00:19:50.781 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:50.781 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:50.781 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 445243' 00:19:50.781 killing process with pid 445243 00:19:50.781 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 445243 00:19:50.781 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.781 00:19:50.781 Latency(us) 00:19:50.781 [2024-10-08T16:27:44.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.781 [2024-10-08T16:27:44.104Z] =================================================================================================================== 00:19:50.781 [2024-10-08T16:27:44.104Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.781 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 445243 00:19:50.781 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 445211 00:19:50.781 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 445211 ']' 00:19:50.781 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 445211 00:19:50.781 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:50.781 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:50.781 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 445211 00:19:50.781 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:50.781 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:50.781 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 445211' 00:19:50.781 killing process with pid 445211 00:19:50.781 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 445211 00:19:50.781 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 445211 00:19:51.040 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:51.040 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:51.040 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:51.040 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.040 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=447102 00:19:51.040 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:51.040 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 447102 00:19:51.040 18:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 447102 ']' 00:19:51.040 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.040 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:51.040 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.040 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:51.040 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.040 [2024-10-08 18:27:44.326134] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:19:51.040 [2024-10-08 18:27:44.326182] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.299 [2024-10-08 18:27:44.400859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.299 [2024-10-08 18:27:44.477949] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.299 [2024-10-08 18:27:44.477987] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.299 [2024-10-08 18:27:44.477995] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.299 [2024-10-08 18:27:44.478000] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.299 [2024-10-08 18:27:44.478006] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
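The app_setup_trace notices around this point follow from starting this nvmf_tgt with -e 0xFFFF (enable every tracepoint group) and -i 0 (pin the shared-memory trace instance id). A short sketch of the capture flow the notice describes, using only the invocations the log itself suggests (the spdk_trace binary path is assumed to be relative to the build tree):

  # live snapshot of the enabled nvmf tracepoints for instance 0
  build/bin/spdk_trace -s nvmf -i 0
  # or preserve the raw trace buffer for offline analysis, as the notice says
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0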
00:19:51.299 [2024-10-08 18:27:44.478579] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.866 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:51.866 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:51.866 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:51.866 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:51.866 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.125 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.125 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.oAhvjHIcuA 00:19:52.125 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oAhvjHIcuA 00:19:52.125 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:52.125 [2024-10-08 18:27:45.359992] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.125 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:52.384 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:52.644 [2024-10-08 18:27:45.740973] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:52.644 [2024-10-08 18:27:45.741172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.644 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:52.644 malloc0 00:19:52.903 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:52.903 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oAhvjHIcuA 00:19:53.162 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:53.421 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=447569 00:19:53.421 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:53.421 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.421 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 447569 /var/tmp/bdevperf.sock 00:19:53.421 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 447569 ']' 00:19:53.421 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.421 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:53.421 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.421 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:53.421 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.421 [2024-10-08 18:27:46.620740] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:19:53.421 [2024-10-08 18:27:46.620788] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447569 ] 00:19:53.421 [2024-10-08 18:27:46.687408] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.681 [2024-10-08 18:27:46.761409] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.249 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:54.249 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:54.249 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oAhvjHIcuA 00:19:54.507 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:54.507 [2024-10-08 18:27:47.821416] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.766 nvme0n1 00:19:54.766 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:54.766 Running I/O for 1 seconds... 
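Each bdevperf result block in this log can be sanity-checked from its own fields: throughput is IOPS times the 4096-byte I/O size, and with the queue depth fixed at 128 the average latency sets the steady-state IOPS ceiling (Little's law). A quick check against the 10-second TLS run reported earlier, runnable as-is:

  # 5590.98 IOPS at 4 KiB per I/O -> MiB/s (reported: 21.84)
  awk 'BEGIN { printf "%.2f MiB/s\n", 5590.98 * 4096 / 1048576 }'
  # queue depth / average latency -> IOPS ceiling (reported: 5590.98)
  awk 'BEGIN { printf "%.0f IOPS\n", 128 / (22860.87 / 1e6) }'

The small gap between the ~5599 IOPS ceiling and the measured 5590.98 is consistent with the 10.013 s runtime including connection setup rather than pure steady-state I/O.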
00:19:55.961 5413.00 IOPS, 21.14 MiB/s 00:19:55.961 Latency(us) 00:19:55.961 [2024-10-08T16:27:49.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.961 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:55.961 Verification LBA range: start 0x0 length 0x2000 00:19:55.961 nvme0n1 : 1.02 5437.44 21.24 0.00 0.00 23348.22 4681.14 32455.92 00:19:55.961 [2024-10-08T16:27:49.284Z] =================================================================================================================== 00:19:55.961 [2024-10-08T16:27:49.284Z] Total : 5437.44 21.24 0.00 0.00 23348.22 4681.14 32455.92 00:19:55.961 { 00:19:55.961 "results": [ 00:19:55.961 { 00:19:55.961 "job": "nvme0n1", 00:19:55.961 "core_mask": "0x2", 00:19:55.961 "workload": "verify", 00:19:55.961 "status": "finished", 00:19:55.961 "verify_range": { 00:19:55.961 "start": 0, 00:19:55.961 "length": 8192 00:19:55.961 }, 00:19:55.961 "queue_depth": 128, 00:19:55.961 "io_size": 4096, 00:19:55.961 "runtime": 1.019046, 00:19:55.961 "iops": 5437.438545463109, 00:19:55.961 "mibps": 21.23999431821527, 00:19:55.961 "io_failed": 0, 00:19:55.961 "io_timeout": 0, 00:19:55.961 "avg_latency_us": 23348.224343207774, 00:19:55.961 "min_latency_us": 4681.142857142857, 00:19:55.961 "max_latency_us": 32455.92380952381 00:19:55.961 } 00:19:55.961 ], 00:19:55.961 "core_count": 1 00:19:55.961 } 00:19:55.961 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 447569 00:19:55.961 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 447569 ']' 00:19:55.961 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 447569 00:19:55.961 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:55.961 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:55.961 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 447569 00:19:55.961 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:55.961 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:55.961 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 447569' 00:19:55.961 killing process with pid 447569 00:19:55.961 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 447569 00:19:55.961 Received shutdown signal, test time was about 1.000000 seconds 00:19:55.961 00:19:55.961 Latency(us) 00:19:55.961 [2024-10-08T16:27:49.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.961 [2024-10-08T16:27:49.284Z] =================================================================================================================== 00:19:55.961 [2024-10-08T16:27:49.284Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.961 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 447569 00:19:56.220 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 447102 00:19:56.220 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 447102 ']' 00:19:56.220 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 447102 00:19:56.220 18:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:56.220 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.220 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 447102 00:19:56.221 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:56.221 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:56.221 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 447102' 00:19:56.221 killing process with pid 447102 00:19:56.221 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 447102 00:19:56.221 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 447102 00:19:56.221 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:56.221 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:56.221 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:56.221 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.479 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=448044 00:19:56.479 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:56.479 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 448044 00:19:56.479 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 448044 ']' 00:19:56.480 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.480 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:56.480 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.480 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:56.480 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.480 [2024-10-08 18:27:49.595618] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:19:56.480 [2024-10-08 18:27:49.595663] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.480 [2024-10-08 18:27:49.667632] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.480 [2024-10-08 18:27:49.732549] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.480 [2024-10-08 18:27:49.732592] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:56.480 [2024-10-08 18:27:49.732599] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.480 [2024-10-08 18:27:49.732605] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.480 [2024-10-08 18:27:49.732609] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.480 [2024-10-08 18:27:49.733196] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.417 [2024-10-08 18:27:50.469725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.417 malloc0 00:19:57.417 [2024-10-08 18:27:50.515505] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:57.417 [2024-10-08 18:27:50.515779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=448289 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 448289 /var/tmp/bdevperf.sock 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 448289 ']' 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.417 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.417 [2024-10-08 18:27:50.593169] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:19:57.417 [2024-10-08 18:27:50.593217] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448289 ] 00:19:57.417 [2024-10-08 18:27:50.661854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.676 [2024-10-08 18:27:50.741499] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.244 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.244 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:58.244 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oAhvjHIcuA 00:19:58.503 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:58.503 [2024-10-08 18:27:51.799083] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.761 nvme0n1 00:19:58.761 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:58.761 Running I/O for 1 seconds... 00:19:59.697 5463.00 IOPS, 21.34 MiB/s 00:19:59.697 Latency(us) 00:19:59.697 [2024-10-08T16:27:53.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.697 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:59.697 Verification LBA range: start 0x0 length 0x2000 00:19:59.697 nvme0n1 : 1.01 5515.79 21.55 0.00 0.00 23046.01 4556.31 23592.96 00:19:59.697 [2024-10-08T16:27:53.020Z] =================================================================================================================== 00:19:59.697 [2024-10-08T16:27:53.020Z] Total : 5515.79 21.55 0.00 0.00 23046.01 4556.31 23592.96 00:19:59.697 { 00:19:59.697 "results": [ 00:19:59.697 { 00:19:59.697 "job": "nvme0n1", 00:19:59.697 "core_mask": "0x2", 00:19:59.697 "workload": "verify", 00:19:59.697 "status": "finished", 00:19:59.697 "verify_range": { 00:19:59.697 "start": 0, 00:19:59.697 "length": 8192 00:19:59.697 }, 00:19:59.697 "queue_depth": 128, 00:19:59.697 "io_size": 4096, 00:19:59.697 "runtime": 1.013636, 00:19:59.697 "iops": 5515.7867321208005, 00:19:59.697 "mibps": 21.546041922346877, 00:19:59.697 "io_failed": 0, 00:19:59.697 "io_timeout": 0, 00:19:59.697 "avg_latency_us": 23046.00893885581, 00:19:59.697 "min_latency_us": 4556.312380952381, 00:19:59.697 "max_latency_us": 23592.96 00:19:59.697 } 00:19:59.697 ], 00:19:59.697 "core_count": 1 00:19:59.697 } 00:19:59.956 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:59.956 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.956 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.956 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.956 18:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:59.956 "subsystems": [ 00:19:59.956 { 00:19:59.956 "subsystem": "keyring", 00:19:59.956 "config": [ 00:19:59.956 { 00:19:59.956 "method": "keyring_file_add_key", 00:19:59.956 "params": { 00:19:59.956 "name": "key0", 00:19:59.956 "path": "/tmp/tmp.oAhvjHIcuA" 00:19:59.956 } 00:19:59.956 } 00:19:59.956 ] 00:19:59.956 }, 00:19:59.956 { 00:19:59.956 "subsystem": "iobuf", 00:19:59.956 "config": [ 00:19:59.956 { 00:19:59.956 "method": "iobuf_set_options", 00:19:59.956 "params": { 00:19:59.956 "small_pool_count": 8192, 00:19:59.956 "large_pool_count": 1024, 00:19:59.956 "small_bufsize": 8192, 00:19:59.956 "large_bufsize": 135168 00:19:59.956 } 00:19:59.956 } 00:19:59.956 ] 00:19:59.956 }, 00:19:59.956 { 00:19:59.956 "subsystem": "sock", 00:19:59.956 "config": [ 00:19:59.956 { 00:19:59.956 "method": "sock_set_default_impl", 00:19:59.956 "params": { 00:19:59.956 "impl_name": "posix" 00:19:59.956 } 00:19:59.956 }, 00:19:59.956 { 00:19:59.956 "method": "sock_impl_set_options", 00:19:59.956 "params": { 00:19:59.956 "impl_name": "ssl", 00:19:59.956 "recv_buf_size": 4096, 00:19:59.956 "send_buf_size": 4096, 00:19:59.956 "enable_recv_pipe": true, 00:19:59.956 "enable_quickack": false, 00:19:59.956 "enable_placement_id": 0, 00:19:59.956 "enable_zerocopy_send_server": true, 00:19:59.956 "enable_zerocopy_send_client": false, 00:19:59.956 "zerocopy_threshold": 0, 00:19:59.956 "tls_version": 0, 00:19:59.956 "enable_ktls": false 00:19:59.956 } 00:19:59.956 }, 00:19:59.956 { 00:19:59.956 "method": "sock_impl_set_options", 00:19:59.956 "params": { 00:19:59.956 "impl_name": "posix", 00:19:59.956 "recv_buf_size": 2097152, 00:19:59.956 "send_buf_size": 2097152, 00:19:59.956 "enable_recv_pipe": true, 00:19:59.956 "enable_quickack": false, 00:19:59.956 "enable_placement_id": 0, 00:19:59.956 "enable_zerocopy_send_server": true, 00:19:59.956 "enable_zerocopy_send_client": false, 00:19:59.956 "zerocopy_threshold": 0, 00:19:59.956 "tls_version": 0, 00:19:59.956 "enable_ktls": false 00:19:59.956 } 00:19:59.956 } 00:19:59.956 ] 00:19:59.956 }, 00:19:59.956 { 00:19:59.956 "subsystem": "vmd", 00:19:59.956 "config": [] 00:19:59.956 }, 00:19:59.956 { 00:19:59.956 "subsystem": "accel", 00:19:59.956 "config": [ 00:19:59.956 { 00:19:59.956 "method": "accel_set_options", 00:19:59.956 "params": { 00:19:59.956 "small_cache_size": 128, 00:19:59.956 "large_cache_size": 16, 00:19:59.956 "task_count": 2048, 00:19:59.956 "sequence_count": 2048, 00:19:59.956 "buf_count": 2048 00:19:59.956 } 00:19:59.956 } 00:19:59.956 ] 00:19:59.956 }, 00:19:59.956 { 00:19:59.956 "subsystem": "bdev", 00:19:59.956 "config": [ 00:19:59.956 { 00:19:59.956 "method": "bdev_set_options", 00:19:59.956 "params": { 00:19:59.956 "bdev_io_pool_size": 65535, 00:19:59.956 "bdev_io_cache_size": 256, 00:19:59.956 "bdev_auto_examine": true, 00:19:59.956 "iobuf_small_cache_size": 128, 00:19:59.956 "iobuf_large_cache_size": 16 00:19:59.956 } 00:19:59.956 }, 00:19:59.956 { 00:19:59.956 "method": "bdev_raid_set_options", 00:19:59.956 "params": { 00:19:59.956 "process_window_size_kb": 1024, 00:19:59.956 "process_max_bandwidth_mb_sec": 0 00:19:59.956 } 00:19:59.956 }, 00:19:59.956 { 00:19:59.956 "method": "bdev_iscsi_set_options", 00:19:59.956 "params": { 00:19:59.956 "timeout_sec": 30 00:19:59.956 } 00:19:59.956 }, 00:19:59.956 { 00:19:59.956 "method": "bdev_nvme_set_options", 00:19:59.956 "params": { 00:19:59.956 "action_on_timeout": "none", 00:19:59.956 "timeout_us": 0, 00:19:59.956 
"timeout_admin_us": 0, 00:19:59.956 "keep_alive_timeout_ms": 10000, 00:19:59.956 "arbitration_burst": 0, 00:19:59.956 "low_priority_weight": 0, 00:19:59.956 "medium_priority_weight": 0, 00:19:59.956 "high_priority_weight": 0, 00:19:59.956 "nvme_adminq_poll_period_us": 10000, 00:19:59.956 "nvme_ioq_poll_period_us": 0, 00:19:59.956 "io_queue_requests": 0, 00:19:59.956 "delay_cmd_submit": true, 00:19:59.956 "transport_retry_count": 4, 00:19:59.956 "bdev_retry_count": 3, 00:19:59.956 "transport_ack_timeout": 0, 00:19:59.956 "ctrlr_loss_timeout_sec": 0, 00:19:59.956 "reconnect_delay_sec": 0, 00:19:59.956 "fast_io_fail_timeout_sec": 0, 00:19:59.956 "disable_auto_failback": false, 00:19:59.956 "generate_uuids": false, 00:19:59.956 "transport_tos": 0, 00:19:59.956 "nvme_error_stat": false, 00:19:59.956 "rdma_srq_size": 0, 00:19:59.956 "io_path_stat": false, 00:19:59.956 "allow_accel_sequence": false, 00:19:59.957 "rdma_max_cq_size": 0, 00:19:59.957 "rdma_cm_event_timeout_ms": 0, 00:19:59.957 "dhchap_digests": [ 00:19:59.957 "sha256", 00:19:59.957 "sha384", 00:19:59.957 "sha512" 00:19:59.957 ], 00:19:59.957 "dhchap_dhgroups": [ 00:19:59.957 "null", 00:19:59.957 "ffdhe2048", 00:19:59.957 "ffdhe3072", 00:19:59.957 "ffdhe4096", 00:19:59.957 "ffdhe6144", 00:19:59.957 "ffdhe8192" 00:19:59.957 ] 00:19:59.957 } 00:19:59.957 }, 00:19:59.957 { 00:19:59.957 "method": "bdev_nvme_set_hotplug", 00:19:59.957 "params": { 00:19:59.957 "period_us": 100000, 00:19:59.957 "enable": false 00:19:59.957 } 00:19:59.957 }, 00:19:59.957 { 00:19:59.957 "method": "bdev_malloc_create", 00:19:59.957 "params": { 00:19:59.957 "name": "malloc0", 00:19:59.957 "num_blocks": 8192, 00:19:59.957 "block_size": 4096, 00:19:59.957 "physical_block_size": 4096, 00:19:59.957 "uuid": "c9183347-72da-47b7-b41a-06b16d0c4088", 00:19:59.957 "optimal_io_boundary": 0, 00:19:59.957 "md_size": 0, 00:19:59.957 "dif_type": 0, 00:19:59.957 "dif_is_head_of_md": false, 00:19:59.957 "dif_pi_format": 0 00:19:59.957 } 00:19:59.957 }, 00:19:59.957 { 00:19:59.957 "method": "bdev_wait_for_examine" 00:19:59.957 } 00:19:59.957 ] 00:19:59.957 }, 00:19:59.957 { 00:19:59.957 "subsystem": "nbd", 00:19:59.957 "config": [] 00:19:59.957 }, 00:19:59.957 { 00:19:59.957 "subsystem": "scheduler", 00:19:59.957 "config": [ 00:19:59.957 { 00:19:59.957 "method": "framework_set_scheduler", 00:19:59.957 "params": { 00:19:59.957 "name": "static" 00:19:59.957 } 00:19:59.957 } 00:19:59.957 ] 00:19:59.957 }, 00:19:59.957 { 00:19:59.957 "subsystem": "nvmf", 00:19:59.957 "config": [ 00:19:59.957 { 00:19:59.957 "method": "nvmf_set_config", 00:19:59.957 "params": { 00:19:59.957 "discovery_filter": "match_any", 00:19:59.957 "admin_cmd_passthru": { 00:19:59.957 "identify_ctrlr": false 00:19:59.957 }, 00:19:59.957 "dhchap_digests": [ 00:19:59.957 "sha256", 00:19:59.957 "sha384", 00:19:59.957 "sha512" 00:19:59.957 ], 00:19:59.957 "dhchap_dhgroups": [ 00:19:59.957 "null", 00:19:59.957 "ffdhe2048", 00:19:59.957 "ffdhe3072", 00:19:59.957 "ffdhe4096", 00:19:59.957 "ffdhe6144", 00:19:59.957 "ffdhe8192" 00:19:59.957 ] 00:19:59.957 } 00:19:59.957 }, 00:19:59.957 { 00:19:59.957 "method": "nvmf_set_max_subsystems", 00:19:59.957 "params": { 00:19:59.957 "max_subsystems": 1024 00:19:59.957 } 00:19:59.957 }, 00:19:59.957 { 00:19:59.957 "method": "nvmf_set_crdt", 00:19:59.957 "params": { 00:19:59.957 "crdt1": 0, 00:19:59.957 "crdt2": 0, 00:19:59.957 "crdt3": 0 00:19:59.957 } 00:19:59.957 }, 00:19:59.957 { 00:19:59.957 "method": "nvmf_create_transport", 00:19:59.957 "params": { 00:19:59.957 "trtype": 
"TCP", 00:19:59.957 "max_queue_depth": 128, 00:19:59.957 "max_io_qpairs_per_ctrlr": 127, 00:19:59.957 "in_capsule_data_size": 4096, 00:19:59.957 "max_io_size": 131072, 00:19:59.957 "io_unit_size": 131072, 00:19:59.957 "max_aq_depth": 128, 00:19:59.957 "num_shared_buffers": 511, 00:19:59.957 "buf_cache_size": 4294967295, 00:19:59.957 "dif_insert_or_strip": false, 00:19:59.957 "zcopy": false, 00:19:59.957 "c2h_success": false, 00:19:59.957 "sock_priority": 0, 00:19:59.957 "abort_timeout_sec": 1, 00:19:59.957 "ack_timeout": 0, 00:19:59.957 "data_wr_pool_size": 0 00:19:59.957 } 00:19:59.957 }, 00:19:59.957 { 00:19:59.957 "method": "nvmf_create_subsystem", 00:19:59.957 "params": { 00:19:59.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.957 "allow_any_host": false, 00:19:59.957 "serial_number": "00000000000000000000", 00:19:59.957 "model_number": "SPDK bdev Controller", 00:19:59.957 "max_namespaces": 32, 00:19:59.957 "min_cntlid": 1, 00:19:59.957 "max_cntlid": 65519, 00:19:59.957 "ana_reporting": false 00:19:59.957 } 00:19:59.957 }, 00:19:59.957 { 00:19:59.957 "method": "nvmf_subsystem_add_host", 00:19:59.957 "params": { 00:19:59.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.957 "host": "nqn.2016-06.io.spdk:host1", 00:19:59.957 "psk": "key0" 00:19:59.957 } 00:19:59.957 }, 00:19:59.957 { 00:19:59.957 "method": "nvmf_subsystem_add_ns", 00:19:59.957 "params": { 00:19:59.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.957 "namespace": { 00:19:59.957 "nsid": 1, 00:19:59.957 "bdev_name": "malloc0", 00:19:59.957 "nguid": "C918334772DA47B7B41A06B16D0C4088", 00:19:59.957 "uuid": "c9183347-72da-47b7-b41a-06b16d0c4088", 00:19:59.957 "no_auto_visible": false 00:19:59.957 } 00:19:59.957 } 00:19:59.957 }, 00:19:59.957 { 00:19:59.957 "method": "nvmf_subsystem_add_listener", 00:19:59.957 "params": { 00:19:59.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.957 "listen_address": { 00:19:59.957 "trtype": "TCP", 00:19:59.957 "adrfam": "IPv4", 00:19:59.957 "traddr": "10.0.0.2", 00:19:59.957 "trsvcid": "4420" 00:19:59.957 }, 00:19:59.957 "secure_channel": false, 00:19:59.957 "sock_impl": "ssl" 00:19:59.957 } 00:19:59.957 } 00:19:59.957 ] 00:19:59.957 } 00:19:59.957 ] 00:19:59.957 }' 00:19:59.957 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:00.217 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:00.217 "subsystems": [ 00:20:00.217 { 00:20:00.217 "subsystem": "keyring", 00:20:00.217 "config": [ 00:20:00.217 { 00:20:00.217 "method": "keyring_file_add_key", 00:20:00.217 "params": { 00:20:00.217 "name": "key0", 00:20:00.217 "path": "/tmp/tmp.oAhvjHIcuA" 00:20:00.217 } 00:20:00.217 } 00:20:00.217 ] 00:20:00.217 }, 00:20:00.217 { 00:20:00.217 "subsystem": "iobuf", 00:20:00.217 "config": [ 00:20:00.217 { 00:20:00.217 "method": "iobuf_set_options", 00:20:00.217 "params": { 00:20:00.217 "small_pool_count": 8192, 00:20:00.217 "large_pool_count": 1024, 00:20:00.217 "small_bufsize": 8192, 00:20:00.217 "large_bufsize": 135168 00:20:00.217 } 00:20:00.217 } 00:20:00.217 ] 00:20:00.217 }, 00:20:00.217 { 00:20:00.217 "subsystem": "sock", 00:20:00.217 "config": [ 00:20:00.217 { 00:20:00.217 "method": "sock_set_default_impl", 00:20:00.217 "params": { 00:20:00.217 "impl_name": "posix" 00:20:00.217 } 00:20:00.217 }, 00:20:00.217 { 00:20:00.217 "method": "sock_impl_set_options", 00:20:00.217 "params": { 00:20:00.217 "impl_name": "ssl", 00:20:00.217 
"recv_buf_size": 4096, 00:20:00.217 "send_buf_size": 4096, 00:20:00.217 "enable_recv_pipe": true, 00:20:00.217 "enable_quickack": false, 00:20:00.217 "enable_placement_id": 0, 00:20:00.217 "enable_zerocopy_send_server": true, 00:20:00.217 "enable_zerocopy_send_client": false, 00:20:00.217 "zerocopy_threshold": 0, 00:20:00.217 "tls_version": 0, 00:20:00.217 "enable_ktls": false 00:20:00.217 } 00:20:00.217 }, 00:20:00.217 { 00:20:00.217 "method": "sock_impl_set_options", 00:20:00.217 "params": { 00:20:00.217 "impl_name": "posix", 00:20:00.217 "recv_buf_size": 2097152, 00:20:00.217 "send_buf_size": 2097152, 00:20:00.217 "enable_recv_pipe": true, 00:20:00.217 "enable_quickack": false, 00:20:00.217 "enable_placement_id": 0, 00:20:00.217 "enable_zerocopy_send_server": true, 00:20:00.217 "enable_zerocopy_send_client": false, 00:20:00.217 "zerocopy_threshold": 0, 00:20:00.217 "tls_version": 0, 00:20:00.217 "enable_ktls": false 00:20:00.217 } 00:20:00.217 } 00:20:00.217 ] 00:20:00.217 }, 00:20:00.217 { 00:20:00.217 "subsystem": "vmd", 00:20:00.217 "config": [] 00:20:00.217 }, 00:20:00.217 { 00:20:00.217 "subsystem": "accel", 00:20:00.217 "config": [ 00:20:00.217 { 00:20:00.217 "method": "accel_set_options", 00:20:00.217 "params": { 00:20:00.217 "small_cache_size": 128, 00:20:00.217 "large_cache_size": 16, 00:20:00.217 "task_count": 2048, 00:20:00.217 "sequence_count": 2048, 00:20:00.217 "buf_count": 2048 00:20:00.217 } 00:20:00.217 } 00:20:00.217 ] 00:20:00.217 }, 00:20:00.217 { 00:20:00.217 "subsystem": "bdev", 00:20:00.217 "config": [ 00:20:00.217 { 00:20:00.217 "method": "bdev_set_options", 00:20:00.217 "params": { 00:20:00.217 "bdev_io_pool_size": 65535, 00:20:00.217 "bdev_io_cache_size": 256, 00:20:00.217 "bdev_auto_examine": true, 00:20:00.217 "iobuf_small_cache_size": 128, 00:20:00.217 "iobuf_large_cache_size": 16 00:20:00.217 } 00:20:00.217 }, 00:20:00.217 { 00:20:00.217 "method": "bdev_raid_set_options", 00:20:00.217 "params": { 00:20:00.217 "process_window_size_kb": 1024, 00:20:00.217 "process_max_bandwidth_mb_sec": 0 00:20:00.217 } 00:20:00.217 }, 00:20:00.217 { 00:20:00.217 "method": "bdev_iscsi_set_options", 00:20:00.217 "params": { 00:20:00.217 "timeout_sec": 30 00:20:00.217 } 00:20:00.217 }, 00:20:00.217 { 00:20:00.217 "method": "bdev_nvme_set_options", 00:20:00.217 "params": { 00:20:00.217 "action_on_timeout": "none", 00:20:00.217 "timeout_us": 0, 00:20:00.217 "timeout_admin_us": 0, 00:20:00.217 "keep_alive_timeout_ms": 10000, 00:20:00.217 "arbitration_burst": 0, 00:20:00.217 "low_priority_weight": 0, 00:20:00.217 "medium_priority_weight": 0, 00:20:00.217 "high_priority_weight": 0, 00:20:00.217 "nvme_adminq_poll_period_us": 10000, 00:20:00.217 "nvme_ioq_poll_period_us": 0, 00:20:00.217 "io_queue_requests": 512, 00:20:00.217 "delay_cmd_submit": true, 00:20:00.217 "transport_retry_count": 4, 00:20:00.217 "bdev_retry_count": 3, 00:20:00.217 "transport_ack_timeout": 0, 00:20:00.217 "ctrlr_loss_timeout_sec": 0, 00:20:00.217 "reconnect_delay_sec": 0, 00:20:00.217 "fast_io_fail_timeout_sec": 0, 00:20:00.217 "disable_auto_failback": false, 00:20:00.217 "generate_uuids": false, 00:20:00.217 "transport_tos": 0, 00:20:00.217 "nvme_error_stat": false, 00:20:00.217 "rdma_srq_size": 0, 00:20:00.217 "io_path_stat": false, 00:20:00.217 "allow_accel_sequence": false, 00:20:00.217 "rdma_max_cq_size": 0, 00:20:00.217 "rdma_cm_event_timeout_ms": 0, 00:20:00.217 "dhchap_digests": [ 00:20:00.217 "sha256", 00:20:00.217 "sha384", 00:20:00.217 "sha512" 00:20:00.217 ], 00:20:00.217 "dhchap_dhgroups": [ 
00:20:00.217 "null", 00:20:00.217 "ffdhe2048", 00:20:00.217 "ffdhe3072", 00:20:00.217 "ffdhe4096", 00:20:00.217 "ffdhe6144", 00:20:00.217 "ffdhe8192" 00:20:00.217 ] 00:20:00.217 } 00:20:00.217 }, 00:20:00.217 { 00:20:00.217 "method": "bdev_nvme_attach_controller", 00:20:00.217 "params": { 00:20:00.217 "name": "nvme0", 00:20:00.217 "trtype": "TCP", 00:20:00.217 "adrfam": "IPv4", 00:20:00.217 "traddr": "10.0.0.2", 00:20:00.217 "trsvcid": "4420", 00:20:00.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.217 "prchk_reftag": false, 00:20:00.217 "prchk_guard": false, 00:20:00.217 "ctrlr_loss_timeout_sec": 0, 00:20:00.217 "reconnect_delay_sec": 0, 00:20:00.217 "fast_io_fail_timeout_sec": 0, 00:20:00.217 "psk": "key0", 00:20:00.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.217 "hdgst": false, 00:20:00.217 "ddgst": false, 00:20:00.217 "multipath": "multipath" 00:20:00.217 } 00:20:00.217 }, 00:20:00.217 { 00:20:00.217 "method": "bdev_nvme_set_hotplug", 00:20:00.217 "params": { 00:20:00.217 "period_us": 100000, 00:20:00.217 "enable": false 00:20:00.217 } 00:20:00.217 }, 00:20:00.217 { 00:20:00.217 "method": "bdev_enable_histogram", 00:20:00.217 "params": { 00:20:00.217 "name": "nvme0n1", 00:20:00.217 "enable": true 00:20:00.217 } 00:20:00.217 }, 00:20:00.217 { 00:20:00.217 "method": "bdev_wait_for_examine" 00:20:00.217 } 00:20:00.217 ] 00:20:00.217 }, 00:20:00.217 { 00:20:00.217 "subsystem": "nbd", 00:20:00.217 "config": [] 00:20:00.217 } 00:20:00.217 ] 00:20:00.217 }' 00:20:00.218 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 448289 00:20:00.218 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 448289 ']' 00:20:00.218 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 448289 00:20:00.218 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:00.218 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:00.218 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 448289 00:20:00.218 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:00.218 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:00.218 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 448289' 00:20:00.218 killing process with pid 448289 00:20:00.218 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 448289 00:20:00.218 Received shutdown signal, test time was about 1.000000 seconds 00:20:00.218 00:20:00.218 Latency(us) 00:20:00.218 [2024-10-08T16:27:53.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.218 [2024-10-08T16:27:53.541Z] =================================================================================================================== 00:20:00.218 [2024-10-08T16:27:53.541Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.218 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 448289 00:20:00.477 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 448044 00:20:00.477 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 448044 ']' 00:20:00.477 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 448044 00:20:00.477 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:00.477 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:00.477 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 448044 00:20:00.477 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:00.477 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:00.477 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 448044' 00:20:00.477 killing process with pid 448044 00:20:00.477 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 448044 00:20:00.477 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 448044 00:20:00.736 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:00.736 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:00.736 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:00.736 "subsystems": [ 00:20:00.736 { 00:20:00.736 "subsystem": "keyring", 00:20:00.736 "config": [ 00:20:00.736 { 00:20:00.736 "method": "keyring_file_add_key", 00:20:00.736 "params": { 00:20:00.736 "name": "key0", 00:20:00.736 "path": "/tmp/tmp.oAhvjHIcuA" 00:20:00.736 } 00:20:00.736 } 00:20:00.736 ] 00:20:00.736 }, 00:20:00.736 { 00:20:00.736 "subsystem": "iobuf", 00:20:00.736 "config": [ 00:20:00.736 { 00:20:00.736 "method": "iobuf_set_options", 00:20:00.736 "params": { 00:20:00.736 "small_pool_count": 8192, 00:20:00.736 "large_pool_count": 1024, 00:20:00.736 "small_bufsize": 8192, 00:20:00.736 "large_bufsize": 135168 00:20:00.736 } 00:20:00.736 } 00:20:00.736 ] 00:20:00.736 }, 00:20:00.736 { 00:20:00.736 "subsystem": "sock", 00:20:00.736 "config": [ 00:20:00.736 { 00:20:00.736 "method": "sock_set_default_impl", 00:20:00.736 "params": { 00:20:00.736 "impl_name": "posix" 00:20:00.736 } 00:20:00.736 }, 00:20:00.736 { 00:20:00.736 "method": "sock_impl_set_options", 00:20:00.736 "params": { 00:20:00.736 "impl_name": "ssl", 00:20:00.736 "recv_buf_size": 4096, 00:20:00.736 "send_buf_size": 4096, 00:20:00.736 "enable_recv_pipe": true, 00:20:00.736 "enable_quickack": false, 00:20:00.736 "enable_placement_id": 0, 00:20:00.736 "enable_zerocopy_send_server": true, 00:20:00.736 "enable_zerocopy_send_client": false, 00:20:00.736 "zerocopy_threshold": 0, 00:20:00.736 "tls_version": 0, 00:20:00.736 "enable_ktls": false 00:20:00.736 } 00:20:00.736 }, 00:20:00.736 { 00:20:00.736 "method": "sock_impl_set_options", 00:20:00.736 "params": { 00:20:00.736 "impl_name": "posix", 00:20:00.736 "recv_buf_size": 2097152, 00:20:00.736 "send_buf_size": 2097152, 00:20:00.736 "enable_recv_pipe": true, 00:20:00.736 "enable_quickack": false, 00:20:00.736 "enable_placement_id": 0, 00:20:00.736 "enable_zerocopy_send_server": true, 00:20:00.736 "enable_zerocopy_send_client": false, 00:20:00.736 "zerocopy_threshold": 0, 00:20:00.736 "tls_version": 0, 00:20:00.736 "enable_ktls": false 00:20:00.736 } 00:20:00.736 } 00:20:00.736 ] 00:20:00.736 }, 00:20:00.736 { 00:20:00.736 "subsystem": "vmd", 00:20:00.736 "config": [] 00:20:00.736 }, 00:20:00.736 { 00:20:00.736 "subsystem": "accel", 00:20:00.736 
"config": [ 00:20:00.736 { 00:20:00.736 "method": "accel_set_options", 00:20:00.736 "params": { 00:20:00.736 "small_cache_size": 128, 00:20:00.736 "large_cache_size": 16, 00:20:00.736 "task_count": 2048, 00:20:00.736 "sequence_count": 2048, 00:20:00.736 "buf_count": 2048 00:20:00.736 } 00:20:00.736 } 00:20:00.736 ] 00:20:00.736 }, 00:20:00.736 { 00:20:00.736 "subsystem": "bdev", 00:20:00.736 "config": [ 00:20:00.736 { 00:20:00.736 "method": "bdev_set_options", 00:20:00.736 "params": { 00:20:00.736 "bdev_io_pool_size": 65535, 00:20:00.736 "bdev_io_cache_size": 256, 00:20:00.736 "bdev_auto_examine": true, 00:20:00.736 "iobuf_small_cache_size": 128, 00:20:00.736 "iobuf_large_cache_size": 16 00:20:00.736 } 00:20:00.736 }, 00:20:00.736 { 00:20:00.736 "method": "bdev_raid_set_options", 00:20:00.736 "params": { 00:20:00.736 "process_window_size_kb": 1024, 00:20:00.736 "process_max_bandwidth_mb_sec": 0 00:20:00.736 } 00:20:00.736 }, 00:20:00.736 { 00:20:00.736 "method": "bdev_iscsi_set_options", 00:20:00.736 "params": { 00:20:00.736 "timeout_sec": 30 00:20:00.736 } 00:20:00.736 }, 00:20:00.736 { 00:20:00.736 "method": "bdev_nvme_set_options", 00:20:00.736 "params": { 00:20:00.736 "action_on_timeout": "none", 00:20:00.736 "timeout_us": 0, 00:20:00.736 "timeout_admin_us": 0, 00:20:00.736 "keep_alive_timeout_ms": 10000, 00:20:00.736 "arbitration_burst": 0, 00:20:00.736 "low_priority_weight": 0, 00:20:00.736 "medium_priority_weight": 0, 00:20:00.736 "high_priority_weight": 0, 00:20:00.736 "nvme_adminq_poll_period_us": 10000, 00:20:00.736 "nvme_ioq_poll_period_us": 0, 00:20:00.736 "io_queue_requests": 0, 00:20:00.736 "delay_cmd_submit": true, 00:20:00.736 "transport_retry_count": 4, 00:20:00.736 "bdev_retry_count": 3, 00:20:00.736 "transport_ack_timeout": 0, 00:20:00.736 "ctrlr_loss_timeout_sec": 0, 00:20:00.736 "reconnect_delay_sec": 0, 00:20:00.736 "fast_io_fail_timeout_sec": 0, 00:20:00.736 "disable_auto_failback": false, 00:20:00.736 "generate_uuids": false, 00:20:00.736 "transport_tos": 0, 00:20:00.736 "nvme_error_stat": false, 00:20:00.736 "rdma_srq_size": 0, 00:20:00.736 "io_path_stat": false, 00:20:00.736 "allow_accel_sequence": false, 00:20:00.736 "rdma_max_cq_size": 0, 00:20:00.736 "rdma_cm_event_timeout_ms": 0, 00:20:00.736 "dhchap_digests": [ 00:20:00.736 "sha256", 00:20:00.736 "sha384", 00:20:00.736 "sha512" 00:20:00.736 ], 00:20:00.736 "dhchap_dhgroups": [ 00:20:00.736 "null", 00:20:00.736 "ffdhe2048", 00:20:00.736 "ffdhe3072", 00:20:00.736 "ffdhe4096", 00:20:00.736 "ffdhe6144", 00:20:00.736 "ffdhe8192" 00:20:00.736 ] 00:20:00.736 } 00:20:00.736 }, 00:20:00.736 { 00:20:00.736 "method": "bdev_nvme_set_hotplug", 00:20:00.736 "params": { 00:20:00.736 "period_us": 100000, 00:20:00.736 "enable": false 00:20:00.736 } 00:20:00.736 }, 00:20:00.736 { 00:20:00.736 "method": "bdev_malloc_create", 00:20:00.737 "params": { 00:20:00.737 "name": "malloc0", 00:20:00.737 "num_blocks": 8192, 00:20:00.737 "block_size": 4096, 00:20:00.737 "physical_block_size": 4096, 00:20:00.737 "uuid": "c9183347-72da-47b7-b41a-06b16d0c4088", 00:20:00.737 "optimal_io_boundary": 0, 00:20:00.737 "md_size": 0, 00:20:00.737 "dif_type": 0, 00:20:00.737 "dif_is_head_of_md": false, 00:20:00.737 "dif_pi_format": 0 00:20:00.737 } 00:20:00.737 }, 00:20:00.737 { 00:20:00.737 "method": "bdev_wait_for_examine" 00:20:00.737 } 00:20:00.737 ] 00:20:00.737 }, 00:20:00.737 { 00:20:00.737 "subsystem": "nbd", 00:20:00.737 "config": [] 00:20:00.737 }, 00:20:00.737 { 00:20:00.737 "subsystem": "scheduler", 00:20:00.737 "config": [ 00:20:00.737 { 
00:20:00.737 "method": "framework_set_scheduler", 00:20:00.737 "params": { 00:20:00.737 "name": "static" 00:20:00.737 } 00:20:00.737 } 00:20:00.737 ] 00:20:00.737 }, 00:20:00.737 { 00:20:00.737 "subsystem": "nvmf", 00:20:00.737 "config": [ 00:20:00.737 { 00:20:00.737 "method": "nvmf_set_config", 00:20:00.737 "params": { 00:20:00.737 "discovery_filter": "match_any", 00:20:00.737 "admin_cmd_passthru": { 00:20:00.737 "identify_ctrlr": false 00:20:00.737 }, 00:20:00.737 "dhchap_digests": [ 00:20:00.737 "sha256", 00:20:00.737 "sha384", 00:20:00.737 "sha512" 00:20:00.737 ], 00:20:00.737 "dhchap_dhgroups": [ 00:20:00.737 "null", 00:20:00.737 "ffdhe2048", 00:20:00.737 "ffdhe3072", 00:20:00.737 "ffdhe4096", 00:20:00.737 "ffdhe6144", 00:20:00.737 "ffdhe8192" 00:20:00.737 ] 00:20:00.737 } 00:20:00.737 }, 00:20:00.737 { 00:20:00.737 "method": "nvmf_set_max_subsystems", 00:20:00.737 "params": { 00:20:00.737 "max_subsystems": 1024 00:20:00.737 } 00:20:00.737 }, 00:20:00.737 { 00:20:00.737 "method": "nvmf_set_crdt", 00:20:00.737 "params": { 00:20:00.737 "crdt1": 0, 00:20:00.737 "crdt2": 0, 00:20:00.737 "crdt3": 0 00:20:00.737 } 00:20:00.737 }, 00:20:00.737 { 00:20:00.737 "method": "nvmf_create_transport", 00:20:00.737 "params": { 00:20:00.737 "trtype": "TCP", 00:20:00.737 "max_queue_depth": 128, 00:20:00.737 "max_io_qpairs_per_ctrlr": 127, 00:20:00.737 "in_capsule_data_size": 4096, 00:20:00.737 "max_io_size": 131072, 00:20:00.737 "io_unit_size": 131072, 00:20:00.737 "max_aq_depth": 128, 00:20:00.737 "num_shared_buffers": 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:00.737 511, 00:20:00.737 "buf_cache_size": 4294967295, 00:20:00.737 "dif_insert_or_strip": false, 00:20:00.737 "zcopy": false, 00:20:00.737 "c2h_success": false, 00:20:00.737 "sock_priority": 0, 00:20:00.737 "abort_timeout_sec": 1, 00:20:00.737 "ack_timeout": 0, 00:20:00.737 "data_wr_pool_size": 0 00:20:00.737 } 00:20:00.737 }, 00:20:00.737 { 00:20:00.737 "method": "nvmf_create_subsystem", 00:20:00.737 "params": { 00:20:00.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.737 "allow_any_host": false, 00:20:00.737 "serial_number": "00000000000000000000", 00:20:00.737 "model_number": "SPDK bdev Controller", 00:20:00.737 "max_namespaces": 32, 00:20:00.737 "min_cntlid": 1, 00:20:00.737 "max_cntlid": 65519, 00:20:00.737 "ana_reporting": false 00:20:00.737 } 00:20:00.737 }, 00:20:00.737 { 00:20:00.737 "method": "nvmf_subsystem_add_host", 00:20:00.737 "params": { 00:20:00.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.737 "host": "nqn.2016-06.io.spdk:host1", 00:20:00.737 "psk": "key0" 00:20:00.737 } 00:20:00.737 }, 00:20:00.737 { 00:20:00.737 "method": "nvmf_subsystem_add_ns", 00:20:00.737 "params": { 00:20:00.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.737 "namespace": { 00:20:00.737 "nsid": 1, 00:20:00.737 "bdev_name": "malloc0", 00:20:00.737 "nguid": "C918334772DA47B7B41A06B16D0C4088", 00:20:00.737 "uuid": "c9183347-72da-47b7-b41a-06b16d0c4088", 00:20:00.737 "no_auto_visible": false 00:20:00.737 } 00:20:00.737 } 00:20:00.737 }, 00:20:00.737 { 00:20:00.737 "method": "nvmf_subsystem_add_listener", 00:20:00.737 "params": { 00:20:00.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.737 "listen_address": { 00:20:00.737 "trtype": "TCP", 00:20:00.737 "adrfam": "IPv4", 00:20:00.737 "traddr": "10.0.0.2", 00:20:00.737 "trsvcid": "4420" 00:20:00.737 }, 00:20:00.737 "secure_channel": false, 00:20:00.737 "sock_impl": "ssl" 00:20:00.737 } 00:20:00.737 } 00:20:00.737 ] 00:20:00.737 } 
00:20:00.737 ] 00:20:00.737 }' 00:20:00.737 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.737 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=448772 00:20:00.737 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:00.737 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 448772 00:20:00.737 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 448772 ']' 00:20:00.737 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.737 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:00.737 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.737 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.737 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.737 [2024-10-08 18:27:53.938838] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:20:00.737 [2024-10-08 18:27:53.938880] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.737 [2024-10-08 18:27:54.010231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.006 [2024-10-08 18:27:54.086727] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.006 [2024-10-08 18:27:54.086768] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.006 [2024-10-08 18:27:54.086775] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.006 [2024-10-08 18:27:54.086781] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.006 [2024-10-08 18:27:54.086786] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
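Note: the '-c /dev/fd/62' in the nvmf_tgt invocation above is how the harness replays the JSON captured by save_config into a fresh target, so the keyring, TLS listener, subsystem, and namespace all come up at boot instead of via post-start RPCs. A rough bash equivalent of the pattern (assuming a running target to snapshot; process substitution is what surfaces as a /dev/fd path):

    # snapshot the live target configuration (keyring, transport, subsystem, listener)
    tgtcfg=$(scripts/rpc.py save_config)
    # start a new target instance directly from that snapshot;
    # bash expands <(...) to a /dev/fd/NN path like the /dev/fd/62 seen above
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")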
00:20:01.006 [2024-10-08 18:27:54.087350] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.006 [2024-10-08 18:27:54.321277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.270 [2024-10-08 18:27:54.353166] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.270 [2024-10-08 18:27:54.353394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.529 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.529 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:01.530 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:01.530 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:01.530 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.530 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.530 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=449017 00:20:01.530 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 449017 /var/tmp/bdevperf.sock 00:20:01.530 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 449017 ']' 00:20:01.530 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.530 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:01.530 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.530 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
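Note: the bdevperf process is bootstrapped the same way. The configuration saved from the first bdevperf run (including its keyring_file_add_key and bdev_nvme_attach_controller entries) is fed back through /dev/fd/63, so this instance reconnects over TLS with no post-start RPCs. A sketch of the equivalent invocation, with the same caveat that the harness uses absolute Jenkins paths:

    # capture the first bdevperf instance's config, then relaunch from it
    bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")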
00:20:01.530 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:01.530 "subsystems": [ 00:20:01.530 { 00:20:01.530 "subsystem": "keyring", 00:20:01.530 "config": [ 00:20:01.530 { 00:20:01.530 "method": "keyring_file_add_key", 00:20:01.530 "params": { 00:20:01.530 "name": "key0", 00:20:01.530 "path": "/tmp/tmp.oAhvjHIcuA" 00:20:01.530 } 00:20:01.530 } 00:20:01.530 ] 00:20:01.530 }, 00:20:01.530 { 00:20:01.530 "subsystem": "iobuf", 00:20:01.530 "config": [ 00:20:01.530 { 00:20:01.530 "method": "iobuf_set_options", 00:20:01.530 "params": { 00:20:01.530 "small_pool_count": 8192, 00:20:01.530 "large_pool_count": 1024, 00:20:01.530 "small_bufsize": 8192, 00:20:01.530 "large_bufsize": 135168 00:20:01.530 } 00:20:01.530 } 00:20:01.530 ] 00:20:01.530 }, 00:20:01.530 { 00:20:01.530 "subsystem": "sock", 00:20:01.530 "config": [ 00:20:01.530 { 00:20:01.530 "method": "sock_set_default_impl", 00:20:01.530 "params": { 00:20:01.530 "impl_name": "posix" 00:20:01.530 } 00:20:01.530 }, 00:20:01.530 { 00:20:01.530 "method": "sock_impl_set_options", 00:20:01.530 "params": { 00:20:01.530 "impl_name": "ssl", 00:20:01.530 "recv_buf_size": 4096, 00:20:01.530 "send_buf_size": 4096, 00:20:01.530 "enable_recv_pipe": true, 00:20:01.530 "enable_quickack": false, 00:20:01.530 "enable_placement_id": 0, 00:20:01.530 "enable_zerocopy_send_server": true, 00:20:01.530 "enable_zerocopy_send_client": false, 00:20:01.530 "zerocopy_threshold": 0, 00:20:01.530 "tls_version": 0, 00:20:01.530 "enable_ktls": false 00:20:01.530 } 00:20:01.530 }, 00:20:01.530 { 00:20:01.530 "method": "sock_impl_set_options", 00:20:01.530 "params": { 00:20:01.530 "impl_name": "posix", 00:20:01.530 "recv_buf_size": 2097152, 00:20:01.530 "send_buf_size": 2097152, 00:20:01.530 "enable_recv_pipe": true, 00:20:01.530 "enable_quickack": false, 00:20:01.530 "enable_placement_id": 0, 00:20:01.530 "enable_zerocopy_send_server": true, 00:20:01.530 "enable_zerocopy_send_client": false, 00:20:01.530 "zerocopy_threshold": 0, 00:20:01.530 "tls_version": 0, 00:20:01.530 "enable_ktls": false 00:20:01.530 } 00:20:01.530 } 00:20:01.530 ] 00:20:01.530 }, 00:20:01.530 { 00:20:01.530 "subsystem": "vmd", 00:20:01.530 "config": [] 00:20:01.530 }, 00:20:01.530 { 00:20:01.530 "subsystem": "accel", 00:20:01.530 "config": [ 00:20:01.530 { 00:20:01.530 "method": "accel_set_options", 00:20:01.530 "params": { 00:20:01.530 "small_cache_size": 128, 00:20:01.530 "large_cache_size": 16, 00:20:01.530 "task_count": 2048, 00:20:01.530 "sequence_count": 2048, 00:20:01.530 "buf_count": 2048 00:20:01.530 } 00:20:01.530 } 00:20:01.530 ] 00:20:01.530 }, 00:20:01.530 { 00:20:01.530 "subsystem": "bdev", 00:20:01.530 "config": [ 00:20:01.530 { 00:20:01.530 "method": "bdev_set_options", 00:20:01.530 "params": { 00:20:01.530 "bdev_io_pool_size": 65535, 00:20:01.530 "bdev_io_cache_size": 256, 00:20:01.530 "bdev_auto_examine": true, 00:20:01.530 "iobuf_small_cache_size": 128, 00:20:01.530 "iobuf_large_cache_size": 16 00:20:01.530 } 00:20:01.530 }, 00:20:01.530 { 00:20:01.530 "method": "bdev_raid_set_options", 00:20:01.530 "params": { 00:20:01.530 "process_window_size_kb": 1024, 00:20:01.530 "process_max_bandwidth_mb_sec": 0 00:20:01.530 } 00:20:01.530 }, 00:20:01.530 { 00:20:01.530 "method": "bdev_iscsi_set_options", 00:20:01.530 "params": { 00:20:01.530 "timeout_sec": 30 00:20:01.530 } 00:20:01.530 }, 00:20:01.530 { 00:20:01.530 "method": "bdev_nvme_set_options", 00:20:01.530 "params": { 00:20:01.530 "action_on_timeout": "none", 00:20:01.530 "timeout_us": 0, 
00:20:01.530 "timeout_admin_us": 0, 00:20:01.530 "keep_alive_timeout_ms": 10000, 00:20:01.530 "arbitration_burst": 0, 00:20:01.530 "low_priority_weight": 0, 00:20:01.530 "medium_priority_weight": 0, 00:20:01.530 "high_priority_weight": 0, 00:20:01.530 "nvme_adminq_poll_period_us": 10000, 00:20:01.530 "nvme_ioq_poll_period_us": 0, 00:20:01.530 "io_queue_requests": 512, 00:20:01.530 "delay_cmd_submit": true, 00:20:01.530 "transport_retry_count": 4, 00:20:01.530 "bdev_retry_count": 3, 00:20:01.530 "transport_ack_timeout": 0, 00:20:01.530 "ctrlr_loss_timeout_sec": 0, 00:20:01.530 "reconnect_delay_sec": 0, 00:20:01.530 "fast_io_fail_timeout_sec": 0, 00:20:01.530 "disable_auto_failback": false, 00:20:01.530 "generate_uuids": false, 00:20:01.530 "transport_tos": 0, 00:20:01.530 "nvme_error_stat": false, 00:20:01.530 "rdma_srq_size": 0, 00:20:01.530 "io_path_stat": false, 00:20:01.530 "allow_accel_sequence": false, 00:20:01.530 "rdma_max_cq_size": 0, 00:20:01.530 "rdma_cm_event_timeout_ms": 0, 00:20:01.530 "dhchap_digests": [ 00:20:01.530 "sha256", 00:20:01.530 "sha384", 00:20:01.530 "sha512" 00:20:01.530 ], 00:20:01.530 "dhchap_dhgroups": [ 00:20:01.530 "null", 00:20:01.530 "ffdhe2048", 00:20:01.530 "ffdhe3072", 00:20:01.530 "ffdhe4096", 00:20:01.530 "ffdhe6144", 00:20:01.530 "ffdhe8192" 00:20:01.530 ] 00:20:01.530 } 00:20:01.530 }, 00:20:01.530 { 00:20:01.530 "method": "bdev_nvme_attach_controller", 00:20:01.530 "params": { 00:20:01.530 "name": "nvme0", 00:20:01.530 "trtype": "TCP", 00:20:01.530 "adrfam": "IPv4", 00:20:01.530 "traddr": "10.0.0.2", 00:20:01.530 "trsvcid": "4420", 00:20:01.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.530 "prchk_reftag": false, 00:20:01.530 "prchk_guard": false, 00:20:01.530 "ctrlr_loss_timeout_sec": 0, 00:20:01.530 "reconnect_delay_sec": 0, 00:20:01.530 "fast_io_fail_timeout_sec": 0, 00:20:01.530 "psk": "key0", 00:20:01.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.530 "hdgst": false, 00:20:01.530 "ddgst": false, 00:20:01.530 "multipath": "multipath" 00:20:01.530 } 00:20:01.530 }, 00:20:01.530 { 00:20:01.530 "method": "bdev_nvme_set_hotplug", 00:20:01.530 "params": { 00:20:01.530 "period_us": 100000, 00:20:01.530 "enable": false 00:20:01.530 } 00:20:01.530 }, 00:20:01.530 { 00:20:01.530 "method": "bdev_enable_histogram", 00:20:01.530 "params": { 00:20:01.530 "name": "nvme0n1", 00:20:01.530 "enable": true 00:20:01.530 } 00:20:01.530 }, 00:20:01.530 { 00:20:01.530 "method": "bdev_wait_for_examine" 00:20:01.530 } 00:20:01.530 ] 00:20:01.530 }, 00:20:01.530 { 00:20:01.530 "subsystem": "nbd", 00:20:01.530 "config": [] 00:20:01.530 } 00:20:01.530 ] 00:20:01.530 }' 00:20:01.530 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.530 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.790 [2024-10-08 18:27:54.857199] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:20:01.790 [2024-10-08 18:27:54.857245] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449017 ] 00:20:01.790 [2024-10-08 18:27:54.924063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.790 [2024-10-08 18:27:54.995829] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.049 [2024-10-08 18:27:55.148295] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.616 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:02.616 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:02.616 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:02.616 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:02.616 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.616 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.875 Running I/O for 1 seconds... 00:20:03.813 5331.00 IOPS, 20.82 MiB/s 00:20:03.813 Latency(us) 00:20:03.813 [2024-10-08T16:27:57.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.813 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:03.813 Verification LBA range: start 0x0 length 0x2000 00:20:03.813 nvme0n1 : 1.01 5388.84 21.05 0.00 0.00 23590.88 5461.33 30957.96 00:20:03.813 [2024-10-08T16:27:57.136Z] =================================================================================================================== 00:20:03.813 [2024-10-08T16:27:57.136Z] Total : 5388.84 21.05 0.00 0.00 23590.88 5461.33 30957.96 00:20:03.813 { 00:20:03.813 "results": [ 00:20:03.813 { 00:20:03.813 "job": "nvme0n1", 00:20:03.813 "core_mask": "0x2", 00:20:03.813 "workload": "verify", 00:20:03.813 "status": "finished", 00:20:03.813 "verify_range": { 00:20:03.813 "start": 0, 00:20:03.813 "length": 8192 00:20:03.813 }, 00:20:03.813 "queue_depth": 128, 00:20:03.813 "io_size": 4096, 00:20:03.813 "runtime": 1.013206, 00:20:03.813 "iops": 5388.835044403606, 00:20:03.813 "mibps": 21.050136892201586, 00:20:03.813 "io_failed": 0, 00:20:03.813 "io_timeout": 0, 00:20:03.813 "avg_latency_us": 23590.876634222917, 00:20:03.813 "min_latency_us": 5461.333333333333, 00:20:03.813 "max_latency_us": 30957.958095238097 00:20:03.813 } 00:20:03.813 ], 00:20:03.813 "core_count": 1 00:20:03.813 } 00:20:03.813 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:03.813 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:03.813 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:03.813 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:03.813 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:03.813 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id 
= --pid ']' 00:20:03.813 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:03.813 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:03.813 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:03.813 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:03.813 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:03.813 nvmf_trace.0 00:20:03.814 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:03.814 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 449017 00:20:03.814 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 449017 ']' 00:20:03.814 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 449017 00:20:03.814 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.814 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.814 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 449017 00:20:04.072 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:04.072 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:04.072 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 449017' 00:20:04.072 killing process with pid 449017 00:20:04.072 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 449017 00:20:04.072 Received shutdown signal, test time was about 1.000000 seconds 00:20:04.072 00:20:04.072 Latency(us) 00:20:04.072 [2024-10-08T16:27:57.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.072 [2024-10-08T16:27:57.395Z] =================================================================================================================== 00:20:04.072 [2024-10-08T16:27:57.395Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.072 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 449017 00:20:04.072 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:04.072 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:04.072 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:04.072 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:04.072 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:04.073 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:04.073 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:04.073 rmmod nvme_tcp 00:20:04.073 rmmod nvme_fabrics 00:20:04.073 rmmod nvme_keyring 00:20:04.332 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:04.332 18:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:04.332 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:04.332 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 448772 ']' 00:20:04.332 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 448772 00:20:04.332 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 448772 ']' 00:20:04.332 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 448772 00:20:04.332 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:04.332 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:04.332 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 448772 00:20:04.332 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:04.332 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:04.332 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 448772' 00:20:04.332 killing process with pid 448772 00:20:04.332 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 448772 00:20:04.332 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 448772 00:20:04.591 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:04.591 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:04.591 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:04.591 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:04.591 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:20:04.591 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:04.591 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:20:04.591 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:04.591 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:04.591 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.591 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.591 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.498 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:06.498 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.qL6HjIgqAA /tmp/tmp.GnEcw9Uwa2 /tmp/tmp.oAhvjHIcuA 00:20:06.498 00:20:06.498 real 1m29.108s 00:20:06.498 user 2m18.647s 00:20:06.498 sys 0m31.122s 00:20:06.498 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:06.498 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.498 ************************************ 00:20:06.498 END TEST nvmf_tls 00:20:06.498 
************************************ 00:20:06.498 18:27:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:06.498 18:27:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:06.498 18:27:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:06.498 18:27:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:06.498 ************************************ 00:20:06.498 START TEST nvmf_fips 00:20:06.498 ************************************ 00:20:06.498 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:06.759 * Looking for test storage... 00:20:06.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:06.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.759 --rc genhtml_branch_coverage=1 00:20:06.759 --rc genhtml_function_coverage=1 00:20:06.759 --rc genhtml_legend=1 00:20:06.759 --rc geninfo_all_blocks=1 00:20:06.759 --rc geninfo_unexecuted_blocks=1 00:20:06.759 00:20:06.759 ' 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:06.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.759 --rc genhtml_branch_coverage=1 00:20:06.759 --rc genhtml_function_coverage=1 00:20:06.759 --rc genhtml_legend=1 00:20:06.759 --rc geninfo_all_blocks=1 00:20:06.759 --rc geninfo_unexecuted_blocks=1 00:20:06.759 00:20:06.759 ' 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:06.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.759 --rc genhtml_branch_coverage=1 00:20:06.759 --rc genhtml_function_coverage=1 00:20:06.759 --rc genhtml_legend=1 00:20:06.759 --rc geninfo_all_blocks=1 00:20:06.759 --rc geninfo_unexecuted_blocks=1 00:20:06.759 00:20:06.759 ' 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:06.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.759 --rc genhtml_branch_coverage=1 00:20:06.759 --rc genhtml_function_coverage=1 00:20:06.759 --rc genhtml_legend=1 00:20:06.759 --rc geninfo_all_blocks=1 00:20:06.759 --rc geninfo_unexecuted_blocks=1 00:20:06.759 00:20:06.759 ' 00:20:06.759 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:06.760 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:06.760 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:06.760 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.760 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.760 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.760 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.760 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.760 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.760 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.760 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.760 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:06.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:06.760 18:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:06.760 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:07.020 Error setting digest 00:20:07.020 40921F2F497F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:07.020 40921F2F497F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:07.020 
18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:07.020 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.591 18:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:13.591 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:13.592 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:13.592 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:13.592 18:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:13.592 Found net devices under 0000:86:00.0: cvl_0_0 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:13.592 Found net devices under 0000:86:00.1: cvl_0_1 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:13.592 18:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.592 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:13.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:20:13.592 00:20:13.592 --- 10.0.0.2 ping statistics --- 00:20:13.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.592 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:13.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:20:13.592 00:20:13.592 --- 10.0.0.1 ping statistics --- 00:20:13.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.592 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=453035 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 453035 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 453035 ']' 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:13.592 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.592 [2024-10-08 18:28:06.241153] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:20:13.592 [2024-10-08 18:28:06.241202] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.592 [2024-10-08 18:28:06.311995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.592 [2024-10-08 18:28:06.389410] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.592 [2024-10-08 18:28:06.389443] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.592 [2024-10-08 18:28:06.389450] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.592 [2024-10-08 18:28:06.389456] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.592 [2024-10-08 18:28:06.389461] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.592 [2024-10-08 18:28:06.389989] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.852 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:13.852 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:13.852 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:13.852 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:13.852 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.852 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.852 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:13.852 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:13.852 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:13.852 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.u06 00:20:13.852 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:13.852 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.u06 00:20:13.852 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.u06 00:20:13.852 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.u06 00:20:13.852 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:14.111 [2024-10-08 18:28:07.264783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.111 [2024-10-08 18:28:07.280790] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.111 [2024-10-08 18:28:07.280981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.111 malloc0 00:20:14.111 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:14.111 18:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=453267 00:20:14.111 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:14.111 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 453267 /var/tmp/bdevperf.sock 00:20:14.111 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 453267 ']' 00:20:14.111 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.111 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:14.111 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.111 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:14.111 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:14.111 [2024-10-08 18:28:07.429306] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:20:14.111 [2024-10-08 18:28:07.429359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453267 ] 00:20:14.371 [2024-10-08 18:28:07.498089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.371 [2024-10-08 18:28:07.574732] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.939 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:14.939 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:14.940 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.u06 00:20:15.198 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:15.456 [2024-10-08 18:28:08.602996] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.456 TLSTESTn1 00:20:15.456 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.713 Running I/O for 10 seconds... 
00:20:17.586 5248.00 IOPS, 20.50 MiB/s [2024-10-08T16:28:11.846Z] 5343.00 IOPS, 20.87 MiB/s [2024-10-08T16:28:13.223Z] 5441.00 IOPS, 21.25 MiB/s [2024-10-08T16:28:14.159Z] 5482.00 IOPS, 21.41 MiB/s [2024-10-08T16:28:15.096Z] 5500.60 IOPS, 21.49 MiB/s [2024-10-08T16:28:16.033Z] 5514.33 IOPS, 21.54 MiB/s [2024-10-08T16:28:16.969Z] 5518.00 IOPS, 21.55 MiB/s [2024-10-08T16:28:17.906Z] 5539.50 IOPS, 21.64 MiB/s [2024-10-08T16:28:18.843Z] 5550.11 IOPS, 21.68 MiB/s [2024-10-08T16:28:18.843Z] 5553.20 IOPS, 21.69 MiB/s 00:20:25.520 Latency(us) 00:20:25.520 [2024-10-08T16:28:18.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.520 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:25.520 Verification LBA range: start 0x0 length 0x2000 00:20:25.520 TLSTESTn1 : 10.01 5557.93 21.71 0.00 0.00 22994.38 5055.63 21845.33 00:20:25.520 [2024-10-08T16:28:18.843Z] =================================================================================================================== 00:20:25.520 [2024-10-08T16:28:18.843Z] Total : 5557.93 21.71 0.00 0.00 22994.38 5055.63 21845.33 00:20:25.521 { 00:20:25.521 "results": [ 00:20:25.521 { 00:20:25.521 "job": "TLSTESTn1", 00:20:25.521 "core_mask": "0x4", 00:20:25.521 "workload": "verify", 00:20:25.521 "status": "finished", 00:20:25.521 "verify_range": { 00:20:25.521 "start": 0, 00:20:25.521 "length": 8192 00:20:25.521 }, 00:20:25.521 "queue_depth": 128, 00:20:25.521 "io_size": 4096, 00:20:25.521 "runtime": 10.014167, 00:20:25.521 "iops": 5557.926086113803, 00:20:25.521 "mibps": 21.710648773882042, 00:20:25.521 "io_failed": 0, 00:20:25.521 "io_timeout": 0, 00:20:25.521 "avg_latency_us": 22994.37848876386, 00:20:25.521 "min_latency_us": 5055.634285714285, 00:20:25.521 "max_latency_us": 21845.333333333332 00:20:25.521 } 00:20:25.521 ], 00:20:25.521 "core_count": 1 00:20:25.521 } 00:20:25.521 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:25.521 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:25.521 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:25.521 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:25.521 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:25.780 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:25.780 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:25.780 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:25.780 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:25.781 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:25.781 nvmf_trace.0 00:20:25.781 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:25.781 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 453267 00:20:25.781 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 453267 ']' 00:20:25.781 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 453267 00:20:25.781 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:25.781 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:25.781 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 453267 00:20:25.781 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:25.781 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:25.781 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 453267' 00:20:25.781 killing process with pid 453267 00:20:25.781 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 453267 00:20:25.781 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.781 00:20:25.781 Latency(us) 00:20:25.781 [2024-10-08T16:28:19.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.781 [2024-10-08T16:28:19.104Z] =================================================================================================================== 00:20:25.781 [2024-10-08T16:28:19.104Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.781 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 453267 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:26.040 rmmod nvme_tcp 00:20:26.040 rmmod nvme_fabrics 00:20:26.040 rmmod nvme_keyring 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 453035 ']' 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 453035 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 453035 ']' 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 453035 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 453035 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:26.040 18:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 453035' 00:20:26.040 killing process with pid 453035 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 453035 00:20:26.040 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 453035 00:20:26.299 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:26.299 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:26.299 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:26.299 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:26.299 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:20:26.299 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:26.299 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:20:26.299 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:26.299 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:26.299 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.299 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.299 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.u06 00:20:28.834 00:20:28.834 real 0m21.754s 00:20:28.834 user 0m23.282s 00:20:28.834 sys 0m9.849s 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:28.834 ************************************ 00:20:28.834 END TEST nvmf_fips 00:20:28.834 ************************************ 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.834 ************************************ 00:20:28.834 START TEST nvmf_control_msg_list 00:20:28.834 ************************************ 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:28.834 * Looking for test storage... 
00:20:28.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:28.834 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:28.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.835 --rc genhtml_branch_coverage=1 00:20:28.835 --rc genhtml_function_coverage=1 00:20:28.835 --rc genhtml_legend=1 00:20:28.835 --rc geninfo_all_blocks=1 00:20:28.835 --rc geninfo_unexecuted_blocks=1 00:20:28.835 00:20:28.835 ' 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:28.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.835 --rc genhtml_branch_coverage=1 00:20:28.835 --rc genhtml_function_coverage=1 00:20:28.835 --rc genhtml_legend=1 00:20:28.835 --rc geninfo_all_blocks=1 00:20:28.835 --rc geninfo_unexecuted_blocks=1 00:20:28.835 00:20:28.835 ' 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:28.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.835 --rc genhtml_branch_coverage=1 00:20:28.835 --rc genhtml_function_coverage=1 00:20:28.835 --rc genhtml_legend=1 00:20:28.835 --rc geninfo_all_blocks=1 00:20:28.835 --rc geninfo_unexecuted_blocks=1 00:20:28.835 00:20:28.835 ' 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:28.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.835 --rc genhtml_branch_coverage=1 00:20:28.835 --rc genhtml_function_coverage=1 00:20:28.835 --rc genhtml_legend=1 00:20:28.835 --rc geninfo_all_blocks=1 00:20:28.835 --rc geninfo_unexecuted_blocks=1 00:20:28.835 00:20:28.835 ' 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:28.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:28.835 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:34.217 18:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:34.217 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.217 18:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:34.217 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.217 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:34.218 Found net devices under 0000:86:00.0: cvl_0_0 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:34.218 Found net devices under 0000:86:00.1: cvl_0_1 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.218 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.477 18:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:34.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:20:34.477 00:20:34.477 --- 10.0.0.2 ping statistics --- 00:20:34.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.477 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:20:34.477 00:20:34.477 --- 10.0.0.1 ping statistics --- 00:20:34.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.477 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=458672 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 458672 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 458672 ']' 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 
-- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:34.477 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.478 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:34.478 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.737 [2024-10-08 18:28:27.815914] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:20:34.737 [2024-10-08 18:28:27.815956] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.737 [2024-10-08 18:28:27.884600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.737 [2024-10-08 18:28:27.956109] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.737 [2024-10-08 18:28:27.956150] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.737 [2024-10-08 18:28:27.956156] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.737 [2024-10-08 18:28:27.956162] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.737 [2024-10-08 18:28:27.956166] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:34.737 [2024-10-08 18:28:27.956721] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:35.677 [2024-10-08 18:28:28.700929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:35.677 Malloc0 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.677 18:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:35.677 [2024-10-08 18:28:28.753523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=458901 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=458903 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=458905 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:35.677 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 458901 00:20:35.677 [2024-10-08 18:28:28.832265] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:35.677 [2024-10-08 18:28:28.832483] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:35.677 [2024-10-08 18:28:28.832715] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:36.614 Initializing NVMe Controllers 00:20:36.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:36.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:36.614 Initialization complete. Launching workers. 
00:20:36.614 ======================================================== 00:20:36.614 Latency(us) 00:20:36.614 Device Information : IOPS MiB/s Average min max 00:20:36.614 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40891.09 40693.48 41001.76 00:20:36.614 ======================================================== 00:20:36.614 Total : 25.00 0.10 40891.09 40693.48 41001.76 00:20:36.614 00:20:36.614 Initializing NVMe Controllers 00:20:36.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:36.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:36.614 Initialization complete. Launching workers. 00:20:36.614 ======================================================== 00:20:36.614 Latency(us) 00:20:36.614 Device Information : IOPS MiB/s Average min max 00:20:36.614 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 7026.96 27.45 141.98 131.98 359.53 00:20:36.614 ======================================================== 00:20:36.614 Total : 7026.96 27.45 141.98 131.98 359.53 00:20:36.614 00:20:36.874 Initializing NVMe Controllers 00:20:36.874 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:36.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:36.874 Initialization complete. Launching workers. 00:20:36.874 ======================================================== 00:20:36.874 Latency(us) 00:20:36.874 Device Information : IOPS MiB/s Average min max 00:20:36.874 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40881.18 40401.53 40994.43 00:20:36.874 ======================================================== 00:20:36.874 Total : 25.00 0.10 40881.18 40401.53 40994.43 00:20:36.874 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 458903 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 458905 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:36.874 rmmod nvme_tcp 00:20:36.874 rmmod nvme_fabrics 00:20:36.874 rmmod nvme_keyring 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 
-- # '[' -n 458672 ']' 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 458672 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 458672 ']' 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 458672 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 458672 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 458672' 00:20:36.874 killing process with pid 458672 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 458672 00:20:36.874 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 458672 00:20:37.134 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:37.134 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:37.134 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:37.134 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:37.134 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:20:37.134 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:37.134 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:20:37.134 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:37.134 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:37.134 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.134 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.134 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.670 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:39.671 00:20:39.671 real 0m10.768s 00:20:39.671 user 0m7.404s 00:20:39.671 sys 0m5.438s 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:39.671 ************************************ 00:20:39.671 END TEST nvmf_control_msg_list 00:20:39.671 ************************************ 
00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:39.671 ************************************ 00:20:39.671 START TEST nvmf_wait_for_buf 00:20:39.671 ************************************ 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:39.671 * Looking for test storage... 00:20:39.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:39.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.671 --rc genhtml_branch_coverage=1 00:20:39.671 --rc genhtml_function_coverage=1 00:20:39.671 --rc genhtml_legend=1 00:20:39.671 --rc geninfo_all_blocks=1 00:20:39.671 --rc geninfo_unexecuted_blocks=1 00:20:39.671 00:20:39.671 ' 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:39.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.671 --rc genhtml_branch_coverage=1 00:20:39.671 --rc genhtml_function_coverage=1 00:20:39.671 --rc genhtml_legend=1 00:20:39.671 --rc geninfo_all_blocks=1 00:20:39.671 --rc geninfo_unexecuted_blocks=1 00:20:39.671 00:20:39.671 ' 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:39.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.671 --rc genhtml_branch_coverage=1 00:20:39.671 --rc genhtml_function_coverage=1 00:20:39.671 --rc genhtml_legend=1 00:20:39.671 --rc geninfo_all_blocks=1 00:20:39.671 --rc geninfo_unexecuted_blocks=1 00:20:39.671 00:20:39.671 ' 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:39.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.671 --rc genhtml_branch_coverage=1 00:20:39.671 --rc genhtml_function_coverage=1 00:20:39.671 --rc genhtml_legend=1 00:20:39.671 --rc geninfo_all_blocks=1 00:20:39.671 --rc geninfo_unexecuted_blocks=1 00:20:39.671 00:20:39.671 ' 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.671 18:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:39.671 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:39.672 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.244 
18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:46.244 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.244 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:46.245 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:46.245 Found net devices under 0000:86:00.0: cvl_0_0 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:46.245 Found net devices under 0000:86:00.1: cvl_0_1 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.245 18:28:38 
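The device scan traced above works purely from PCI IDs: test/nvmf/common.sh keeps allow-lists keyed by vendor:device (Intel E810 variants 0x1592/0x159b, X722 0x37d2, and a set of Mellanox ConnectX IDs), picks the e810 list because SPDK_TEST_NVMF_NICS=e810, and then resolves each matched function to its kernel netdev through sysfs. A minimal sketch of that last step, assuming the sysfs layout the trace relies on:

    pci=0000:86:00.0
    # every PCI network function lists its netdev name(s) under net/
    ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0 on this host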
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:46.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:20:46.245 00:20:46.245 --- 10.0.0.2 ping statistics --- 00:20:46.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.245 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:46.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:46.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:20:46.245 00:20:46.245 --- 10.0.0.1 ping statistics --- 00:20:46.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.245 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=462606 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 462606 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 462606 ']' 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:46.245 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:46.245 [2024-10-08 18:28:38.720369] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
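To put real packets on the wire with a single host, nvmf_tcp_init splits the two E810 ports across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), an iptables rule opens the NVMe/TCP port, and the cross-namespace pings above confirm both directions before nvmf_tgt is launched inside the namespace via ip netns exec. Condensed from the trace, host-specific interface names kept as-is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'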
00:20:46.245 [2024-10-08 18:28:38.720419] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.245 [2024-10-08 18:28:38.790828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.245 [2024-10-08 18:28:38.866122] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.245 [2024-10-08 18:28:38.866165] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.245 [2024-10-08 18:28:38.866172] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.245 [2024-10-08 18:28:38.866178] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.245 [2024-10-08 18:28:38.866183] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.245 [2024-10-08 18:28:38.866760] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.245 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:46.245 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:20:46.245 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:46.245 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:46.245 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.505 18:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:46.505 Malloc0 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:46.505 [2024-10-08 18:28:39.697995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:46.505 [2024-10-08 18:28:39.722161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.505 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:46.505 [2024-10-08 18:28:39.802471] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:48.409 Initializing NVMe Controllers 00:20:48.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:48.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:48.409 Initialization complete. Launching workers. 00:20:48.409 ======================================================== 00:20:48.409 Latency(us) 00:20:48.409 Device Information : IOPS MiB/s Average min max 00:20:48.409 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33538.14 30923.22 71063.36 00:20:48.409 ======================================================== 00:20:48.409 Total : 124.00 15.50 33538.14 30923.22 71063.36 00:20:48.409 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:48.409 rmmod nvme_tcp 00:20:48.409 rmmod nvme_fabrics 00:20:48.409 rmmod nvme_keyring 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 462606 ']' 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 462606 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 462606 ']' 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 462606 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
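The check traced above is the whole point of nvmf_wait_for_buf: the setup RPCs shrank the shared iobuf small pool (iobuf_set_options --small-pool-count 154 --small_bufsize=8192) and created the TCP transport with only 24 buffers (-n 24 -b 24), so the queue-depth-4, 128 KiB random-read perf run must repeatedly wait for buffers; the ~33.5 ms average latency in the table is consistent with that starvation. The test passes only if the nvmf_TCP module actually recorded pool retries. Condensed from the trace (rpc_cmd in the harness wraps scripts/rpc.py; the invocation path here is assumed):

    # shrink the small iobuf pool before framework_start_init (wait_for_buf.sh@20)
    scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    # ... transport/subsystem setup and the spdk_nvme_perf run ...
    retry_count=$(scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [ "$retry_count" -eq 0 ] && exit 1     # fail if the pool never ran dry (1958 retries here)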
common/autotest_common.sh@955 -- # uname 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 462606 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 462606' 00:20:48.409 killing process with pid 462606 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 462606 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 462606 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:20:48.409 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:48.410 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:48.410 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:48.410 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.410 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.410 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.945 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:50.945 00:20:50.945 real 0m11.225s 00:20:50.945 user 0m4.866s 00:20:50.945 sys 0m4.978s 00:20:50.945 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:50.945 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:50.945 ************************************ 00:20:50.945 END TEST nvmf_wait_for_buf 00:20:50.945 ************************************ 00:20:50.945 18:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:50.945 18:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:50.945 18:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:50.945 18:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:50.945 18:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:50.945 18:28:43 
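Teardown mirrors setup: the target is killed by pid, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, and iptr strips only the firewall rules the harness added, which is why the ACCEPT rule was inserted with the SPDK_NVMF comment tag. The cleanup reduces to:

    # drop every rule carrying the SPDK_NVMF tag, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # then remove the test namespace (assumed: _remove_spdk_ns is traced
    # above but its output is redirected away) and flush the leftover address
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1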
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:56.224 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:56.224 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:56.224 Found net devices under 0000:86:00.0: cvl_0_0 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.224 18:28:49 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:56.224 Found net devices under 0000:86:00.1: cvl_0_1 00:20:56.225 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.225 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:56.225 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.225 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:56.225 18:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:56.225 18:28:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:56.225 18:28:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:56.225 18:28:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:56.225 ************************************ 00:20:56.225 START TEST nvmf_perf_adq 00:20:56.225 ************************************ 00:20:56.225 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:56.225 * Looking for test storage... 00:20:56.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:56.484 18:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:56.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.484 --rc genhtml_branch_coverage=1 00:20:56.484 --rc genhtml_function_coverage=1 00:20:56.484 --rc genhtml_legend=1 00:20:56.484 --rc geninfo_all_blocks=1 00:20:56.484 --rc geninfo_unexecuted_blocks=1 00:20:56.484 00:20:56.484 ' 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:56.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.484 --rc genhtml_branch_coverage=1 00:20:56.484 --rc genhtml_function_coverage=1 00:20:56.484 --rc genhtml_legend=1 00:20:56.484 --rc geninfo_all_blocks=1 00:20:56.484 --rc geninfo_unexecuted_blocks=1 00:20:56.484 00:20:56.484 ' 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:56.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.484 --rc genhtml_branch_coverage=1 00:20:56.484 --rc genhtml_function_coverage=1 00:20:56.484 --rc genhtml_legend=1 00:20:56.484 --rc geninfo_all_blocks=1 00:20:56.484 --rc geninfo_unexecuted_blocks=1 00:20:56.484 00:20:56.484 ' 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:56.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.484 --rc genhtml_branch_coverage=1 00:20:56.484 --rc genhtml_function_coverage=1 00:20:56.484 --rc genhtml_legend=1 00:20:56.484 --rc geninfo_all_blocks=1 00:20:56.484 --rc geninfo_unexecuted_blocks=1 00:20:56.484 00:20:56.484 ' 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
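The lt 1.15 2 trace above is scripts/common.sh deciding whether the installed lcov (1.15 here) predates 2.x, which selects the older set of --rc coverage flags exported just before perf_adq.sh sources common.sh. The comparison splits each version on '.'/'-' and compares field by field, padding missing fields with 0. A simplified, self-contained sketch of the same idea (not the script's exact code):

    ver_lt() {                      # usage: ver_lt 1.15 2 && echo older
        local IFS=.- i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                    # equal is not less-than
    }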
00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.484 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:56.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:56.485 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:56.485 18:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:03.054 18:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:03.054 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:03.054 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:03.054 Found net devices under 0000:86:00.0: cvl_0_0 00:21:03.054 18:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:03.054 Found net devices under 0000:86:00.1: cvl_0_1 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:03.054 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:03.313 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:05.221 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:10.498 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:10.498 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:10.499 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:10.499 Found net devices under 0000:86:00.0: cvl_0_0 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:10.499 Found net devices under 0000:86:00.1: cvl_0_1 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:10.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:21:10.499 00:21:10.499 --- 10.0.0.2 ping statistics --- 00:21:10.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.499 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:10.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:21:10.499 00:21:10.499 --- 10.0.0.1 ping statistics --- 00:21:10.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.499 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=471023 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 471023 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 471023 ']' 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:10.499 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.499 [2024-10-08 18:29:03.745947] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
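The nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) builds the point-to-point test topology: the target-side E810 port (cvl_0_0) is moved into a dedicated network namespace, the two ports get 10.0.0.2/24 and 10.0.0.1/24, an iptables rule admits TCP/4420 on the initiator side, and one ping in each direction proves reachability before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of the same setup; interface names, addresses, and the port are taken from the trace, only the scaffolding is added:

```bash
#!/usr/bin/env bash
# Sketch of the namespace topology built by nvmf_tcp_init above.
set -e
TGT_IF=cvl_0_0            # port that moves into the target namespace
INI_IF=cvl_0_1            # port that stays with the kernel initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"               # isolate the target port
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP (port 4420) on the initiator side.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
```

The harness additionally tags the iptables rule with an SPDK_NVMF comment so teardown can drop it by filtering iptables-save output (the iptr step visible later in the run), and it starts nvmf_tgt under `ip netns exec cvl_0_0_ns_spdk` so target and initiator traffic crosses the physical link between the two ports rather than loopback.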
00:21:10.499 [2024-10-08 18:29:03.745993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.499 [2024-10-08 18:29:03.817509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.758 [2024-10-08 18:29:03.897967] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.758 [2024-10-08 18:29:03.898003] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.758 [2024-10-08 18:29:03.898010] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.758 [2024-10-08 18:29:03.898017] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.758 [2024-10-08 18:29:03.898022] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.758 [2024-10-08 18:29:03.899540] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.758 [2024-10-08 18:29:03.899576] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.758 [2024-10-08 18:29:03.899693] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.758 [2024-10-08 18:29:03.899694] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.326 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.326 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:11.326 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:11.326 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:11.326 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.326 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.326 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:11.326 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:11.326 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:11.326 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.326 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.326 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.584 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:11.584 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:11.584 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.585 
18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.585 [2024-10-08 18:29:04.778756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.585 Malloc1 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.585 [2024-10-08 18:29:04.822264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=471271 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:11.585 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:14.115 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:14.115 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.115 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:14.115 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.115 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:14.115 "tick_rate": 2100000000, 00:21:14.115 "poll_groups": [ 00:21:14.115 { 00:21:14.115 "name": "nvmf_tgt_poll_group_000", 00:21:14.115 "admin_qpairs": 1, 00:21:14.115 "io_qpairs": 1, 00:21:14.115 "current_admin_qpairs": 1, 00:21:14.115 "current_io_qpairs": 1, 00:21:14.115 "pending_bdev_io": 0, 00:21:14.115 "completed_nvme_io": 20319, 00:21:14.115 "transports": [ 00:21:14.115 { 00:21:14.115 "trtype": "TCP" 00:21:14.115 } 00:21:14.115 ] 00:21:14.115 }, 00:21:14.115 { 00:21:14.115 "name": "nvmf_tgt_poll_group_001", 00:21:14.115 "admin_qpairs": 0, 00:21:14.115 "io_qpairs": 1, 00:21:14.115 "current_admin_qpairs": 0, 00:21:14.115 "current_io_qpairs": 1, 00:21:14.115 "pending_bdev_io": 0, 00:21:14.115 "completed_nvme_io": 20375, 00:21:14.115 "transports": [ 00:21:14.115 { 00:21:14.115 "trtype": "TCP" 00:21:14.115 } 00:21:14.115 ] 00:21:14.115 }, 00:21:14.115 { 00:21:14.115 "name": "nvmf_tgt_poll_group_002", 00:21:14.115 "admin_qpairs": 0, 00:21:14.115 "io_qpairs": 1, 00:21:14.115 "current_admin_qpairs": 0, 00:21:14.115 "current_io_qpairs": 1, 00:21:14.115 "pending_bdev_io": 0, 00:21:14.115 "completed_nvme_io": 20213, 00:21:14.115 "transports": [ 00:21:14.115 { 00:21:14.115 "trtype": "TCP" 00:21:14.115 } 00:21:14.115 ] 00:21:14.115 }, 00:21:14.115 { 00:21:14.115 "name": "nvmf_tgt_poll_group_003", 00:21:14.115 "admin_qpairs": 0, 00:21:14.115 "io_qpairs": 1, 00:21:14.115 "current_admin_qpairs": 0, 00:21:14.115 "current_io_qpairs": 1, 00:21:14.115 "pending_bdev_io": 0, 00:21:14.115 "completed_nvme_io": 19879, 00:21:14.115 "transports": [ 00:21:14.115 { 00:21:14.115 "trtype": "TCP" 00:21:14.115 } 00:21:14.115 ] 00:21:14.115 } 00:21:14.115 ] 00:21:14.115 }' 00:21:14.115 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:14.115 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:14.115 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:14.115 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:14.115 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 471271 00:21:22.226 Initializing NVMe Controllers 00:21:22.226 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:22.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:22.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:22.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:22.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:21:22.226 Initialization complete. Launching workers. 00:21:22.226 ======================================================== 00:21:22.226 Latency(us) 00:21:22.226 Device Information : IOPS MiB/s Average min max 00:21:22.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10665.58 41.66 6000.35 1387.53 10171.38 00:21:22.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10868.97 42.46 5888.23 2313.08 9815.81 00:21:22.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10749.98 41.99 5952.43 1498.43 10137.95 00:21:22.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10751.48 42.00 5951.63 2335.54 13086.62 00:21:22.226 ======================================================== 00:21:22.226 Total : 43036.00 168.11 5947.89 1387.53 13086.62 00:21:22.226 00:21:22.226 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:22.226 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:22.226 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:22.226 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:22.226 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:22.226 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:22.226 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:22.226 rmmod nvme_tcp 00:21:22.226 rmmod nvme_fabrics 00:21:22.226 rmmod nvme_keyring 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 471023 ']' 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 471023 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 471023 ']' 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 471023 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 471023 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 471023' 00:21:22.226 killing process with pid 471023 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 471023 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 471023 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.226 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.131 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:24.131 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:24.131 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:24.131 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:25.509 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:27.415 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:32.690 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:32.691 18:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:32.691 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:32.691 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:32.691 Found net devices under 0000:86:00.0: cvl_0_0 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:32.691 18:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:32.691 Found net devices under 0000:86:00.1: cvl_0_1 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.691 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:32.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:21:32.692 00:21:32.692 --- 10.0.0.2 ping statistics --- 00:21:32.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.692 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:32.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:21:32.692 00:21:32.692 --- 10.0.0.1 ping statistics --- 00:21:32.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.692 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:32.692 net.core.busy_poll = 1 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:21:32.692 net.core.busy_read = 1 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:32.692 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.692 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=475051 00:21:32.692 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 475051 00:21:32.692 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:32.692 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 475051 ']' 00:21:32.692 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.692 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:32.692 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.692 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:32.692 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.951 [2024-10-08 18:29:26.060706] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:21:32.951 [2024-10-08 18:29:26.060754] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.951 [2024-10-08 18:29:26.131741] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:32.952 [2024-10-08 18:29:26.201930] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
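The adq_configure_driver trace above (perf_adq.sh@22-38) is the heart of the ADQ pass: hardware TC offload is switched on, busy polling is enabled, the port is split into two hardware traffic classes with mqprio in channel mode, and a flower filter steers NVMe/TCP traffic for the listener (10.0.0.2:4420) into the dedicated class, offloaded with skip_sw. A condensed sketch with every argument taken from the trace; the 2@0 2@2 queue split matches this 4-queue setup and would scale with core count:

```bash
# Sketch of the ADQ driver setup traced above; run inside the target netns.
IF=cvl_0_0

ethtool --offload "$IF" hw-tc-offload on        # let the ice driver offload tc
ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off

sysctl -w net.core.busy_poll=1                  # poll sockets instead of sleeping
sysctl -w net.core.busy_read=1

# Two hardware traffic classes: TC0 (queues 0-1) default, TC1 (queues 2-3) ADQ.
tc qdisc add dev "$IF" root mqprio num_tc 2 map 0 1 \
   queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev "$IF" ingress

# Steer the NVMe/TCP listener's traffic into TC1, in hardware only (skip_sw).
tc filter add dev "$IF" protocol ip parent ffff: prio 1 flower \
   dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

The final set_xps_rxqs step then, as its name suggests, pairs each transmit queue with its receive queue through XPS so a connection's send and receive work stay on the same core, which is the property the placement-id socket option exploits on the target side.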
00:21:32.952 [2024-10-08 18:29:26.201973] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.952 [2024-10-08 18:29:26.201980] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.952 [2024-10-08 18:29:26.201986] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.952 [2024-10-08 18:29:26.201991] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.952 [2024-10-08 18:29:26.203663] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.952 [2024-10-08 18:29:26.203771] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.952 [2024-10-08 18:29:26.203877] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.952 [2024-10-08 18:29:26.203878] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.888 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.888 18:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 [2024-10-08 18:29:27.078472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 Malloc1 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 [2024-10-08 18:29:27.130057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=475191 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:33.888 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:36.422 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:36.423 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.423 18:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:36.423 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.423 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:36.423 "tick_rate": 2100000000, 00:21:36.423 "poll_groups": [ 00:21:36.423 { 00:21:36.423 "name": "nvmf_tgt_poll_group_000", 00:21:36.423 "admin_qpairs": 1, 00:21:36.423 "io_qpairs": 2, 00:21:36.423 "current_admin_qpairs": 1, 00:21:36.423 "current_io_qpairs": 2, 00:21:36.423 "pending_bdev_io": 0, 00:21:36.423 "completed_nvme_io": 29124, 00:21:36.423 "transports": [ 00:21:36.423 { 00:21:36.423 "trtype": "TCP" 00:21:36.423 } 00:21:36.423 ] 00:21:36.423 }, 00:21:36.423 { 00:21:36.423 "name": "nvmf_tgt_poll_group_001", 00:21:36.423 "admin_qpairs": 0, 00:21:36.423 "io_qpairs": 2, 00:21:36.423 "current_admin_qpairs": 0, 00:21:36.423 "current_io_qpairs": 2, 00:21:36.423 "pending_bdev_io": 0, 00:21:36.423 "completed_nvme_io": 28681, 00:21:36.423 "transports": [ 00:21:36.423 { 00:21:36.423 "trtype": "TCP" 00:21:36.423 } 00:21:36.423 ] 00:21:36.423 }, 00:21:36.423 { 00:21:36.423 "name": "nvmf_tgt_poll_group_002", 00:21:36.423 "admin_qpairs": 0, 00:21:36.423 "io_qpairs": 0, 00:21:36.423 "current_admin_qpairs": 0, 00:21:36.423 "current_io_qpairs": 0, 00:21:36.423 "pending_bdev_io": 0, 00:21:36.423 "completed_nvme_io": 0, 00:21:36.423 "transports": [ 00:21:36.423 { 00:21:36.423 "trtype": "TCP" 00:21:36.423 } 00:21:36.423 ] 00:21:36.423 }, 00:21:36.423 { 00:21:36.423 "name": "nvmf_tgt_poll_group_003", 00:21:36.423 "admin_qpairs": 0, 00:21:36.423 "io_qpairs": 0, 00:21:36.423 "current_admin_qpairs": 0, 00:21:36.423 "current_io_qpairs": 0, 00:21:36.423 "pending_bdev_io": 0, 00:21:36.423 "completed_nvme_io": 0, 00:21:36.423 "transports": [ 00:21:36.423 { 00:21:36.423 "trtype": "TCP" 00:21:36.423 } 00:21:36.423 ] 00:21:36.423 } 00:21:36.423 ] 00:21:36.423 }' 00:21:36.423 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:36.423 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:36.423 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:36.423 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:36.423 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 475191 00:21:44.542 Initializing NVMe Controllers 00:21:44.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:44.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:44.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:44.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:44.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:44.543 Initialization complete. Launching workers. 
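The jq pipeline above appears to be the pass/fail probe for ADQ steering: it prints one line per poll group whose current_io_qpairs is 0 and counts the lines. With four reactors (-m 0xF) and four perf connections, the expectation seems to be that I/O collapses onto two poll groups while the other two stay idle, so the failing [[ 2 -lt 2 ]] guard is the success path here. A standalone equivalent, where a direct scripts/rpc.py call stands in for the script's rpc_cmd wrapper (an assumption about invocation, not about the RPC itself):

  # 'length' on each selected object prints its key count; wc -l counts the objects
  idle=$(scripts/rpc.py nvmf_get_stats \
           | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
           | wc -l)
  (( idle < 2 )) && echo "ADQ steering failed: I/O not concentrated" >&2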
00:21:44.543 ======================================================== 00:21:44.543 Latency(us) 00:21:44.543 Device Information : IOPS MiB/s Average min max 00:21:44.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7880.60 30.78 8145.44 1398.60 53747.58 00:21:44.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8201.70 32.04 7813.81 1209.26 53741.31 00:21:44.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7258.20 28.35 8817.67 1573.23 51925.21 00:21:44.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6958.80 27.18 9195.62 1464.38 52714.22 00:21:44.543 ======================================================== 00:21:44.543 Total : 30299.29 118.36 8457.90 1209.26 53747.58 00:21:44.543 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:44.543 rmmod nvme_tcp 00:21:44.543 rmmod nvme_fabrics 00:21:44.543 rmmod nvme_keyring 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 475051 ']' 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 475051 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 475051 ']' 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 475051 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 475051 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 475051' 00:21:44.543 killing process with pid 475051 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 475051 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 475051 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:44.543 18:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.543 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.583 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:46.583 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:46.583 00:21:46.583 real 0m50.279s 00:21:46.583 user 2m49.494s 00:21:46.583 sys 0m10.347s 00:21:46.583 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:46.583 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.583 ************************************ 00:21:46.583 END TEST nvmf_perf_adq 00:21:46.583 ************************************ 00:21:46.583 18:29:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:46.583 18:29:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:46.583 18:29:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:46.583 18:29:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:46.583 ************************************ 00:21:46.583 START TEST nvmf_shutdown 00:21:46.583 ************************************ 00:21:46.583 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:46.583 * Looking for test storage... 
00:21:46.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:46.583 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:46.583 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:21:46.583 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:46.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.843 --rc genhtml_branch_coverage=1 00:21:46.843 --rc genhtml_function_coverage=1 00:21:46.843 --rc genhtml_legend=1 00:21:46.843 --rc geninfo_all_blocks=1 00:21:46.843 --rc geninfo_unexecuted_blocks=1 00:21:46.843 00:21:46.843 ' 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:46.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.843 --rc genhtml_branch_coverage=1 00:21:46.843 --rc genhtml_function_coverage=1 00:21:46.843 --rc genhtml_legend=1 00:21:46.843 --rc geninfo_all_blocks=1 00:21:46.843 --rc geninfo_unexecuted_blocks=1 00:21:46.843 00:21:46.843 ' 00:21:46.843 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:46.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.843 --rc genhtml_branch_coverage=1 00:21:46.843 --rc genhtml_function_coverage=1 00:21:46.843 --rc genhtml_legend=1 00:21:46.843 --rc geninfo_all_blocks=1 00:21:46.843 --rc geninfo_unexecuted_blocks=1 00:21:46.843 00:21:46.843 ' 00:21:46.844 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:46.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.844 --rc genhtml_branch_coverage=1 00:21:46.844 --rc genhtml_function_coverage=1 00:21:46.844 --rc genhtml_legend=1 00:21:46.844 --rc geninfo_all_blocks=1 00:21:46.844 --rc geninfo_unexecuted_blocks=1 00:21:46.844 00:21:46.844 ' 00:21:46.844 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.844 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
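The wall of scripts/common.sh trace above is just a shell version comparator, used to decide whether the installed lcov (1.15 here) predates 2.x and therefore needs the --rc lcov_branch_coverage / lcov_function_coverage option style chosen on the following lines. Versions are split on '.', '-' and ':' and compared field by field, padding the shorter list with zeros. A minimal sketch of the less-than case only (the real helper also implements the other comparison operators and validates each field as a decimal):

  lt() {
    local -a v1 v2
    local i n
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo "lcov < 2: use the lcov_*_coverage=1 option style"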
00:21:46.844 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.844 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.844 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.844 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.844 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.844 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.844 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.844 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.844 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.844 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:46.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:46.844 18:29:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:46.844 ************************************ 00:21:46.844 START TEST nvmf_shutdown_tc1 00:21:46.844 ************************************ 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:46.844 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:53.415 18:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:53.415 18:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:53.415 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:53.415 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:53.415 Found net devices under 0000:86:00.0: cvl_0_0 00:21:53.415 18:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:53.415 Found net devices under 0000:86:00.1: cvl_0_1 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:53.415 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.415 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:53.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:21:53.416 00:21:53.416 --- 10.0.0.2 ping statistics --- 00:21:53.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.416 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:53.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:21:53.416 00:21:53.416 --- 10.0.0.1 ping statistics --- 00:21:53.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.416 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=480535 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 480535 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 480535 ']' 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
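nvmftestinit's network bring-up, traced over the preceding lines, splits the two E810 ports between the root namespace (initiator side, cvl_0_1 as 10.0.0.1) and a private namespace (target side, cvl_0_0 as 10.0.0.2) so a single host can exercise both ends of the fabric. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port; the SPDK_NVMF comment tag is what the iptr cleanup
  # seen earlier in this log greps out of iptables-save on teardown
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns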
00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:53.416 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:53.416 [2024-10-08 18:29:46.143876] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:21:53.416 [2024-10-08 18:29:46.143929] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.416 [2024-10-08 18:29:46.216632] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:53.416 [2024-10-08 18:29:46.295372] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.416 [2024-10-08 18:29:46.295412] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.416 [2024-10-08 18:29:46.295420] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.416 [2024-10-08 18:29:46.295426] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.416 [2024-10-08 18:29:46.295432] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.416 [2024-10-08 18:29:46.296940] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.416 [2024-10-08 18:29:46.297048] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.416 [2024-10-08 18:29:46.297156] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.416 [2024-10-08 18:29:46.297157] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:21:53.674 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:53.674 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:53.674 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:53.674 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:53.674 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:53.936 [2024-10-08 18:29:47.033537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:53.936 18:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:53.936 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.937 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:53.937 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.937 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:53.937 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.937 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:53.937 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:53.937 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.937 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:53.937 Malloc1 
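The create_subsystems loop above traces only ten cat invocations into rpcs.txt followed by a single batched rpc_cmd; the batch contents themselves are not echoed. Judging from the Malloc1..Malloc10 bdevs and the listener notice that follow, each iteration presumably appends the same four RPCs the perf_adq test issued individually, parameterized by $i and by the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 defaults set when shutdown.sh was sourced. Assumed per-iteration batch, shown for i=1 (serial numbers presumably increment per subsystem):

  # contents inferred from the surrounding log, not traced here
  bdev_malloc_create 64 512 -b Malloc1
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420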
00:21:53.937 [2024-10-08 18:29:47.128816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.937 Malloc2 00:21:53.937 Malloc3 00:21:53.937 Malloc4 00:21:54.198 Malloc5 00:21:54.198 Malloc6 00:21:54.198 Malloc7 00:21:54.198 Malloc8 00:21:54.198 Malloc9 00:21:54.198 Malloc10 00:21:54.198 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.198 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:54.198 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:54.198 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=480811 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 480811 /var/tmp/bdevperf.sock 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 480811 ']' 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:54.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
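bdev_svc is launched with --json /dev/fd/63, meaning the configuration produced by gen_nvmf_target_json (whose template is expanded below) arrives through process substitution, while -r /var/tmp/bdevperf.sock gives the app a private RPC socket so it cannot collide with the target's /var/tmp/spdk.sock. The same pattern reduced to its shape; the readiness poll is an illustrative stand-in for the harness's waitforlisten helper:

  test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) &
  # poll the private socket until the app answers RPCs
  until scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done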
00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:54.458 { 00:21:54.458 "params": { 00:21:54.458 "name": "Nvme$subsystem", 00:21:54.458 "trtype": "$TEST_TRANSPORT", 00:21:54.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.458 "adrfam": "ipv4", 00:21:54.458 "trsvcid": "$NVMF_PORT", 00:21:54.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.458 "hdgst": ${hdgst:-false}, 00:21:54.458 "ddgst": ${ddgst:-false} 00:21:54.458 }, 00:21:54.458 "method": "bdev_nvme_attach_controller" 00:21:54.458 } 00:21:54.458 EOF 00:21:54.458 )") 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:54.458 { 00:21:54.458 "params": { 00:21:54.458 "name": "Nvme$subsystem", 00:21:54.458 "trtype": "$TEST_TRANSPORT", 00:21:54.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.458 "adrfam": "ipv4", 00:21:54.458 "trsvcid": "$NVMF_PORT", 00:21:54.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.458 "hdgst": ${hdgst:-false}, 00:21:54.458 "ddgst": ${ddgst:-false} 00:21:54.458 }, 00:21:54.458 "method": "bdev_nvme_attach_controller" 00:21:54.458 } 00:21:54.458 EOF 00:21:54.458 )") 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:54.458 { 00:21:54.458 "params": { 00:21:54.458 "name": "Nvme$subsystem", 00:21:54.458 "trtype": "$TEST_TRANSPORT", 00:21:54.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.458 "adrfam": "ipv4", 00:21:54.458 "trsvcid": "$NVMF_PORT", 00:21:54.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.458 "hdgst": ${hdgst:-false}, 00:21:54.458 "ddgst": ${ddgst:-false} 00:21:54.458 }, 00:21:54.458 "method": "bdev_nvme_attach_controller" 00:21:54.458 } 00:21:54.458 EOF 00:21:54.458 )") 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- 
# config+=("$(cat <<-EOF 00:21:54.458 { 00:21:54.458 "params": { 00:21:54.458 "name": "Nvme$subsystem", 00:21:54.458 "trtype": "$TEST_TRANSPORT", 00:21:54.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.458 "adrfam": "ipv4", 00:21:54.458 "trsvcid": "$NVMF_PORT", 00:21:54.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.458 "hdgst": ${hdgst:-false}, 00:21:54.458 "ddgst": ${ddgst:-false} 00:21:54.458 }, 00:21:54.458 "method": "bdev_nvme_attach_controller" 00:21:54.458 } 00:21:54.458 EOF 00:21:54.458 )") 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:54.458 { 00:21:54.458 "params": { 00:21:54.458 "name": "Nvme$subsystem", 00:21:54.458 "trtype": "$TEST_TRANSPORT", 00:21:54.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.458 "adrfam": "ipv4", 00:21:54.458 "trsvcid": "$NVMF_PORT", 00:21:54.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.458 "hdgst": ${hdgst:-false}, 00:21:54.458 "ddgst": ${ddgst:-false} 00:21:54.458 }, 00:21:54.458 "method": "bdev_nvme_attach_controller" 00:21:54.458 } 00:21:54.458 EOF 00:21:54.458 )") 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:54.458 { 00:21:54.458 "params": { 00:21:54.458 "name": "Nvme$subsystem", 00:21:54.458 "trtype": "$TEST_TRANSPORT", 00:21:54.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.458 "adrfam": "ipv4", 00:21:54.458 "trsvcid": "$NVMF_PORT", 00:21:54.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.458 "hdgst": ${hdgst:-false}, 00:21:54.458 "ddgst": ${ddgst:-false} 00:21:54.458 }, 00:21:54.458 "method": "bdev_nvme_attach_controller" 00:21:54.458 } 00:21:54.458 EOF 00:21:54.458 )") 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:54.458 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:54.458 { 00:21:54.458 "params": { 00:21:54.458 "name": "Nvme$subsystem", 00:21:54.458 "trtype": "$TEST_TRANSPORT", 00:21:54.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.458 "adrfam": "ipv4", 00:21:54.458 "trsvcid": "$NVMF_PORT", 00:21:54.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.458 "hdgst": ${hdgst:-false}, 00:21:54.458 "ddgst": ${ddgst:-false} 00:21:54.459 }, 00:21:54.459 "method": "bdev_nvme_attach_controller" 00:21:54.459 } 00:21:54.459 EOF 00:21:54.459 )") 00:21:54.459 [2024-10-08 18:29:47.600620] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:21:54.459 [2024-10-08 18:29:47.600666] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:54.459 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:54.459 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:54.459 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:54.459 { 00:21:54.459 "params": { 00:21:54.459 "name": "Nvme$subsystem", 00:21:54.459 "trtype": "$TEST_TRANSPORT", 00:21:54.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.459 "adrfam": "ipv4", 00:21:54.459 "trsvcid": "$NVMF_PORT", 00:21:54.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.459 "hdgst": ${hdgst:-false}, 00:21:54.459 "ddgst": ${ddgst:-false} 00:21:54.459 }, 00:21:54.459 "method": "bdev_nvme_attach_controller" 00:21:54.459 } 00:21:54.459 EOF 00:21:54.459 )") 00:21:54.459 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:54.459 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:54.459 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:54.459 { 00:21:54.459 "params": { 00:21:54.459 "name": "Nvme$subsystem", 00:21:54.459 "trtype": "$TEST_TRANSPORT", 00:21:54.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.459 "adrfam": "ipv4", 00:21:54.459 "trsvcid": "$NVMF_PORT", 00:21:54.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.459 "hdgst": ${hdgst:-false}, 00:21:54.459 "ddgst": ${ddgst:-false} 00:21:54.459 }, 00:21:54.459 "method": "bdev_nvme_attach_controller" 00:21:54.459 } 00:21:54.459 EOF 00:21:54.459 )") 00:21:54.459 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:54.459 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:54.459 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:54.459 { 00:21:54.459 "params": { 00:21:54.459 "name": "Nvme$subsystem", 00:21:54.459 "trtype": "$TEST_TRANSPORT", 00:21:54.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.459 "adrfam": "ipv4", 00:21:54.459 "trsvcid": "$NVMF_PORT", 00:21:54.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.459 "hdgst": ${hdgst:-false}, 00:21:54.459 "ddgst": ${ddgst:-false} 00:21:54.459 }, 00:21:54.459 "method": "bdev_nvme_attach_controller" 00:21:54.459 } 00:21:54.459 EOF 00:21:54.459 )") 00:21:54.459 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:54.459 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
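
The run of identical heredoc appends traced above is test/nvmf/common.sh building one bdev_nvme_attach_controller entry per subsystem ID, comma-joining the fragments (IFS=,) and validating the result with jq. A condensed sketch of that pattern follows; the enclosing app-config wrapper is not visible in this log, so a bare JSON array is assumed here purely so the jq call has valid input:

# Sketch of the gen_nvmf_target_json pattern seen in the trace
# (nvmf/common.sh@558-584); the real helper's output wrapper differs --
# a bare JSON array is assumed here so jq receives valid JSON.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                           # "${config[*]}" joins the fragments on commas
    printf '[%s]\n' "${config[*]}" | jq .
}

Called as gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10, this expands into the ten Nvme1..Nvme10 entries printed next.
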
00:21:54.459 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:21:54.459 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:54.459 "params": { 00:21:54.459 "name": "Nvme1", 00:21:54.459 "trtype": "tcp", 00:21:54.459 "traddr": "10.0.0.2", 00:21:54.459 "adrfam": "ipv4", 00:21:54.459 "trsvcid": "4420", 00:21:54.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:54.459 "hdgst": false, 00:21:54.459 "ddgst": false 00:21:54.459 }, 00:21:54.459 "method": "bdev_nvme_attach_controller" 00:21:54.459 },{ 00:21:54.459 "params": { 00:21:54.459 "name": "Nvme2", 00:21:54.459 "trtype": "tcp", 00:21:54.459 "traddr": "10.0.0.2", 00:21:54.459 "adrfam": "ipv4", 00:21:54.459 "trsvcid": "4420", 00:21:54.459 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:54.459 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:54.459 "hdgst": false, 00:21:54.459 "ddgst": false 00:21:54.459 }, 00:21:54.459 "method": "bdev_nvme_attach_controller" 00:21:54.459 },{ 00:21:54.459 "params": { 00:21:54.459 "name": "Nvme3", 00:21:54.459 "trtype": "tcp", 00:21:54.459 "traddr": "10.0.0.2", 00:21:54.459 "adrfam": "ipv4", 00:21:54.459 "trsvcid": "4420", 00:21:54.459 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:54.459 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:54.459 "hdgst": false, 00:21:54.459 "ddgst": false 00:21:54.459 }, 00:21:54.459 "method": "bdev_nvme_attach_controller" 00:21:54.459 },{ 00:21:54.459 "params": { 00:21:54.459 "name": "Nvme4", 00:21:54.459 "trtype": "tcp", 00:21:54.459 "traddr": "10.0.0.2", 00:21:54.459 "adrfam": "ipv4", 00:21:54.459 "trsvcid": "4420", 00:21:54.459 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:54.459 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:54.459 "hdgst": false, 00:21:54.459 "ddgst": false 00:21:54.459 }, 00:21:54.459 "method": "bdev_nvme_attach_controller" 00:21:54.459 },{ 00:21:54.459 "params": { 00:21:54.459 "name": "Nvme5", 00:21:54.459 "trtype": "tcp", 00:21:54.459 "traddr": "10.0.0.2", 00:21:54.459 "adrfam": "ipv4", 00:21:54.459 "trsvcid": "4420", 00:21:54.459 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:54.459 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:54.459 "hdgst": false, 00:21:54.459 "ddgst": false 00:21:54.459 }, 00:21:54.459 "method": "bdev_nvme_attach_controller" 00:21:54.459 },{ 00:21:54.459 "params": { 00:21:54.459 "name": "Nvme6", 00:21:54.459 "trtype": "tcp", 00:21:54.459 "traddr": "10.0.0.2", 00:21:54.459 "adrfam": "ipv4", 00:21:54.459 "trsvcid": "4420", 00:21:54.459 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:54.459 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:54.459 "hdgst": false, 00:21:54.459 "ddgst": false 00:21:54.459 }, 00:21:54.459 "method": "bdev_nvme_attach_controller" 00:21:54.459 },{ 00:21:54.459 "params": { 00:21:54.459 "name": "Nvme7", 00:21:54.459 "trtype": "tcp", 00:21:54.459 "traddr": "10.0.0.2", 00:21:54.459 "adrfam": "ipv4", 00:21:54.459 "trsvcid": "4420", 00:21:54.459 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:54.459 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:54.459 "hdgst": false, 00:21:54.459 "ddgst": false 00:21:54.459 }, 00:21:54.459 "method": "bdev_nvme_attach_controller" 00:21:54.459 },{ 00:21:54.459 "params": { 00:21:54.459 "name": "Nvme8", 00:21:54.459 "trtype": "tcp", 00:21:54.459 "traddr": "10.0.0.2", 00:21:54.459 "adrfam": "ipv4", 00:21:54.459 "trsvcid": "4420", 00:21:54.459 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:54.459 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:54.459 "hdgst": false, 00:21:54.459 "ddgst": false 00:21:54.459 }, 00:21:54.459 "method": "bdev_nvme_attach_controller" 00:21:54.459 },{ 00:21:54.459 "params": { 00:21:54.459 "name": "Nvme9", 00:21:54.459 "trtype": "tcp", 00:21:54.459 "traddr": "10.0.0.2", 00:21:54.459 "adrfam": "ipv4", 00:21:54.459 "trsvcid": "4420", 00:21:54.459 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:54.459 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:54.459 "hdgst": false, 00:21:54.459 "ddgst": false 00:21:54.459 }, 00:21:54.459 "method": "bdev_nvme_attach_controller" 00:21:54.459 },{ 00:21:54.459 "params": { 00:21:54.459 "name": "Nvme10", 00:21:54.459 "trtype": "tcp", 00:21:54.459 "traddr": "10.0.0.2", 00:21:54.459 "adrfam": "ipv4", 00:21:54.459 "trsvcid": "4420", 00:21:54.459 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:54.459 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:54.459 "hdgst": false, 00:21:54.459 "ddgst": false 00:21:54.459 }, 00:21:54.459 "method": "bdev_nvme_attach_controller" 00:21:54.459 }' 00:21:54.459 [2024-10-08 18:29:47.672615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.459 [2024-10-08 18:29:47.744580] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.362 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:56.362 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:56.362 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:56.363 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.363 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:56.363 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.363 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 480811 00:21:56.363 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:56.363 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:56.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 480811 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:56.930 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 480535 00:21:56.930 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:56.930 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:56.930 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:21:56.930 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:21:56.930 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:21:56.930 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:56.930 { 00:21:56.930 "params": { 00:21:56.930 "name": "Nvme$subsystem", 00:21:56.930 "trtype": "$TEST_TRANSPORT", 00:21:56.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.930 "adrfam": "ipv4", 00:21:56.930 "trsvcid": "$NVMF_PORT", 00:21:56.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.930 "hdgst": ${hdgst:-false}, 00:21:56.930 "ddgst": ${ddgst:-false} 00:21:56.930 }, 00:21:56.930 "method": "bdev_nvme_attach_controller" 00:21:56.930 } 00:21:56.930 EOF 00:21:56.930 )") 00:21:56.930 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:56.930 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:56.930 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:56.930 { 00:21:56.930 "params": { 00:21:56.930 "name": "Nvme$subsystem", 00:21:56.930 "trtype": "$TEST_TRANSPORT", 00:21:56.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.930 "adrfam": "ipv4", 00:21:56.930 "trsvcid": "$NVMF_PORT", 00:21:56.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.930 "hdgst": ${hdgst:-false}, 00:21:56.930 "ddgst": ${ddgst:-false} 00:21:56.930 }, 00:21:56.930 "method": "bdev_nvme_attach_controller" 00:21:56.930 } 00:21:56.930 EOF 00:21:56.930 )") 00:21:56.930 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:57.190 { 00:21:57.190 "params": { 00:21:57.190 "name": "Nvme$subsystem", 00:21:57.190 "trtype": "$TEST_TRANSPORT", 00:21:57.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.190 "adrfam": "ipv4", 00:21:57.190 "trsvcid": "$NVMF_PORT", 00:21:57.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.190 "hdgst": ${hdgst:-false}, 00:21:57.190 "ddgst": ${ddgst:-false} 00:21:57.190 }, 00:21:57.190 "method": "bdev_nvme_attach_controller" 00:21:57.190 } 00:21:57.190 EOF 00:21:57.190 )") 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:57.190 { 00:21:57.190 "params": { 00:21:57.190 "name": "Nvme$subsystem", 00:21:57.190 "trtype": "$TEST_TRANSPORT", 00:21:57.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.190 "adrfam": "ipv4", 00:21:57.190 "trsvcid": "$NVMF_PORT", 00:21:57.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.190 "hdgst": ${hdgst:-false}, 00:21:57.190 "ddgst": ${ddgst:-false} 00:21:57.190 }, 00:21:57.190 "method": "bdev_nvme_attach_controller" 00:21:57.190 } 00:21:57.190 EOF 00:21:57.190 )") 00:21:57.190 18:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:57.190 { 00:21:57.190 "params": { 00:21:57.190 "name": "Nvme$subsystem", 00:21:57.190 "trtype": "$TEST_TRANSPORT", 00:21:57.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.190 "adrfam": "ipv4", 00:21:57.190 "trsvcid": "$NVMF_PORT", 00:21:57.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.190 "hdgst": ${hdgst:-false}, 00:21:57.190 "ddgst": ${ddgst:-false} 00:21:57.190 }, 00:21:57.190 "method": "bdev_nvme_attach_controller" 00:21:57.190 } 00:21:57.190 EOF 00:21:57.190 )") 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:57.190 { 00:21:57.190 "params": { 00:21:57.190 "name": "Nvme$subsystem", 00:21:57.190 "trtype": "$TEST_TRANSPORT", 00:21:57.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.190 "adrfam": "ipv4", 00:21:57.190 "trsvcid": "$NVMF_PORT", 00:21:57.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.190 "hdgst": ${hdgst:-false}, 00:21:57.190 "ddgst": ${ddgst:-false} 00:21:57.190 }, 00:21:57.190 "method": "bdev_nvme_attach_controller" 00:21:57.190 } 00:21:57.190 EOF 00:21:57.190 )") 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:57.190 [2024-10-08 18:29:50.280053] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:21:57.190 [2024-10-08 18:29:50.280100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481300 ] 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:57.190 { 00:21:57.190 "params": { 00:21:57.190 "name": "Nvme$subsystem", 00:21:57.190 "trtype": "$TEST_TRANSPORT", 00:21:57.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.190 "adrfam": "ipv4", 00:21:57.190 "trsvcid": "$NVMF_PORT", 00:21:57.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.190 "hdgst": ${hdgst:-false}, 00:21:57.190 "ddgst": ${ddgst:-false} 00:21:57.190 }, 00:21:57.190 "method": "bdev_nvme_attach_controller" 00:21:57.190 } 00:21:57.190 EOF 00:21:57.190 )") 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:57.190 { 00:21:57.190 "params": { 00:21:57.190 "name": "Nvme$subsystem", 00:21:57.190 "trtype": "$TEST_TRANSPORT", 00:21:57.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.190 "adrfam": "ipv4", 00:21:57.190 "trsvcid": "$NVMF_PORT", 00:21:57.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.190 "hdgst": ${hdgst:-false}, 00:21:57.190 "ddgst": ${ddgst:-false} 00:21:57.190 }, 00:21:57.190 "method": "bdev_nvme_attach_controller" 00:21:57.190 } 00:21:57.190 EOF 00:21:57.190 )") 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:57.190 { 00:21:57.190 "params": { 00:21:57.190 "name": "Nvme$subsystem", 00:21:57.190 "trtype": "$TEST_TRANSPORT", 00:21:57.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.190 "adrfam": "ipv4", 00:21:57.190 "trsvcid": "$NVMF_PORT", 00:21:57.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.190 "hdgst": ${hdgst:-false}, 00:21:57.190 "ddgst": ${ddgst:-false} 00:21:57.190 }, 00:21:57.190 "method": "bdev_nvme_attach_controller" 00:21:57.190 } 00:21:57.190 EOF 00:21:57.190 )") 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:57.190 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:57.190 { 00:21:57.190 "params": { 00:21:57.190 "name": "Nvme$subsystem", 00:21:57.190 "trtype": "$TEST_TRANSPORT", 00:21:57.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.190 
"adrfam": "ipv4", 00:21:57.191 "trsvcid": "$NVMF_PORT", 00:21:57.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.191 "hdgst": ${hdgst:-false}, 00:21:57.191 "ddgst": ${ddgst:-false} 00:21:57.191 }, 00:21:57.191 "method": "bdev_nvme_attach_controller" 00:21:57.191 } 00:21:57.191 EOF 00:21:57.191 )") 00:21:57.191 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:57.191 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:21:57.191 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:21:57.191 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:57.191 "params": { 00:21:57.191 "name": "Nvme1", 00:21:57.191 "trtype": "tcp", 00:21:57.191 "traddr": "10.0.0.2", 00:21:57.191 "adrfam": "ipv4", 00:21:57.191 "trsvcid": "4420", 00:21:57.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.191 "hdgst": false, 00:21:57.191 "ddgst": false 00:21:57.191 }, 00:21:57.191 "method": "bdev_nvme_attach_controller" 00:21:57.191 },{ 00:21:57.191 "params": { 00:21:57.191 "name": "Nvme2", 00:21:57.191 "trtype": "tcp", 00:21:57.191 "traddr": "10.0.0.2", 00:21:57.191 "adrfam": "ipv4", 00:21:57.191 "trsvcid": "4420", 00:21:57.191 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:57.191 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:57.191 "hdgst": false, 00:21:57.191 "ddgst": false 00:21:57.191 }, 00:21:57.191 "method": "bdev_nvme_attach_controller" 00:21:57.191 },{ 00:21:57.191 "params": { 00:21:57.191 "name": "Nvme3", 00:21:57.191 "trtype": "tcp", 00:21:57.191 "traddr": "10.0.0.2", 00:21:57.191 "adrfam": "ipv4", 00:21:57.191 "trsvcid": "4420", 00:21:57.191 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:57.191 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:57.191 "hdgst": false, 00:21:57.191 "ddgst": false 00:21:57.191 }, 00:21:57.191 "method": "bdev_nvme_attach_controller" 00:21:57.191 },{ 00:21:57.191 "params": { 00:21:57.191 "name": "Nvme4", 00:21:57.191 "trtype": "tcp", 00:21:57.191 "traddr": "10.0.0.2", 00:21:57.191 "adrfam": "ipv4", 00:21:57.191 "trsvcid": "4420", 00:21:57.191 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:57.191 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:57.191 "hdgst": false, 00:21:57.191 "ddgst": false 00:21:57.191 }, 00:21:57.191 "method": "bdev_nvme_attach_controller" 00:21:57.191 },{ 00:21:57.191 "params": { 00:21:57.191 "name": "Nvme5", 00:21:57.191 "trtype": "tcp", 00:21:57.191 "traddr": "10.0.0.2", 00:21:57.191 "adrfam": "ipv4", 00:21:57.191 "trsvcid": "4420", 00:21:57.191 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:57.191 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:57.191 "hdgst": false, 00:21:57.191 "ddgst": false 00:21:57.191 }, 00:21:57.191 "method": "bdev_nvme_attach_controller" 00:21:57.191 },{ 00:21:57.191 "params": { 00:21:57.191 "name": "Nvme6", 00:21:57.191 "trtype": "tcp", 00:21:57.191 "traddr": "10.0.0.2", 00:21:57.191 "adrfam": "ipv4", 00:21:57.191 "trsvcid": "4420", 00:21:57.191 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:57.191 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:57.191 "hdgst": false, 00:21:57.191 "ddgst": false 00:21:57.191 }, 00:21:57.191 "method": "bdev_nvme_attach_controller" 00:21:57.191 },{ 00:21:57.191 "params": { 00:21:57.191 "name": "Nvme7", 00:21:57.191 "trtype": "tcp", 00:21:57.191 "traddr": "10.0.0.2", 
00:21:57.191 "adrfam": "ipv4", 00:21:57.191 "trsvcid": "4420", 00:21:57.191 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:57.191 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:57.191 "hdgst": false, 00:21:57.191 "ddgst": false 00:21:57.191 }, 00:21:57.191 "method": "bdev_nvme_attach_controller" 00:21:57.191 },{ 00:21:57.191 "params": { 00:21:57.191 "name": "Nvme8", 00:21:57.191 "trtype": "tcp", 00:21:57.191 "traddr": "10.0.0.2", 00:21:57.191 "adrfam": "ipv4", 00:21:57.191 "trsvcid": "4420", 00:21:57.191 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:57.191 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:57.191 "hdgst": false, 00:21:57.191 "ddgst": false 00:21:57.191 }, 00:21:57.191 "method": "bdev_nvme_attach_controller" 00:21:57.191 },{ 00:21:57.191 "params": { 00:21:57.191 "name": "Nvme9", 00:21:57.191 "trtype": "tcp", 00:21:57.191 "traddr": "10.0.0.2", 00:21:57.191 "adrfam": "ipv4", 00:21:57.191 "trsvcid": "4420", 00:21:57.191 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:57.191 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:57.191 "hdgst": false, 00:21:57.191 "ddgst": false 00:21:57.191 }, 00:21:57.191 "method": "bdev_nvme_attach_controller" 00:21:57.191 },{ 00:21:57.191 "params": { 00:21:57.191 "name": "Nvme10", 00:21:57.191 "trtype": "tcp", 00:21:57.191 "traddr": "10.0.0.2", 00:21:57.191 "adrfam": "ipv4", 00:21:57.191 "trsvcid": "4420", 00:21:57.191 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:57.191 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:57.191 "hdgst": false, 00:21:57.191 "ddgst": false 00:21:57.191 }, 00:21:57.191 "method": "bdev_nvme_attach_controller" 00:21:57.191 }' 00:21:57.191 [2024-10-08 18:29:50.351616] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.191 [2024-10-08 18:29:50.424931] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.570 Running I/O for 1 seconds... 
00:21:59.766 2271.00 IOPS, 141.94 MiB/s
00:21:59.766 Latency(us)
[2024-10-08T16:29:53.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:59.766 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:59.766 Verification LBA range: start 0x0 length 0x400
00:21:59.766 Nvme1n1 : 1.08 237.32 14.83 0.00 0.00 267373.71 19473.55 217704.35
00:21:59.766 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:59.766 Verification LBA range: start 0x0 length 0x400
00:21:59.766 Nvme2n1 : 1.12 286.55 17.91 0.00 0.00 217502.87 15354.15 214708.42
00:21:59.766 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:59.766 Verification LBA range: start 0x0 length 0x400
00:21:59.766 Nvme3n1 : 1.08 314.00 19.62 0.00 0.00 191718.38 11359.57 216705.71
00:21:59.766 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:59.766 Verification LBA range: start 0x0 length 0x400
00:21:59.766 Nvme4n1 : 1.11 291.84 18.24 0.00 0.00 208195.56 7021.71 217704.35
00:21:59.766 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:59.766 Verification LBA range: start 0x0 length 0x400
00:21:59.766 Nvme5n1 : 1.07 239.38 14.96 0.00 0.00 249577.57 18724.57 219701.64
00:21:59.766 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:59.766 Verification LBA range: start 0x0 length 0x400
00:21:59.766 Nvme6n1 : 1.15 283.47 17.72 0.00 0.00 208588.63 3073.95 238675.87
00:21:59.766 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:59.766 Verification LBA range: start 0x0 length 0x400
00:21:59.766 Nvme7n1 : 1.12 285.86 17.87 0.00 0.00 203466.65 31956.60 195734.19
00:21:59.766 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:59.766 Verification LBA range: start 0x0 length 0x400
00:21:59.766 Nvme8n1 : 1.12 285.16 17.82 0.00 0.00 200762.51 16852.11 212711.13
00:21:59.766 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:59.766 Verification LBA range: start 0x0 length 0x400
00:21:59.766 Nvme9n1 : 1.15 281.36 17.59 0.00 0.00 201046.69 717.78 257650.10
00:21:59.766 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:59.766 Verification LBA range: start 0x0 length 0x400
00:21:59.766 Nvme10n1 : 1.16 331.40 20.71 0.00 0.00 168199.40 6023.07 222697.57
00:21:59.766 [2024-10-08T16:29:53.089Z] ===================================================================================================================
00:21:59.766 [2024-10-08T16:29:53.089Z] Total : 2836.35 177.27 0.00 0.00 208736.64 717.78 257650.10
00:22:00.025 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:22:00.025 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:00.025 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:00.025 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:00.025 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:00.025 18:29:53
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:00.025 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:00.025 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:00.025 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:00.025 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:00.026 rmmod nvme_tcp 00:22:00.026 rmmod nvme_fabrics 00:22:00.026 rmmod nvme_keyring 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 480535 ']' 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 480535 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 480535 ']' 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 480535 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 480535 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 480535' 00:22:00.026 killing process with pid 480535 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 480535 00:22:00.026 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 480535 00:22:00.594 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:00.594 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:00.594 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:00.594 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:00.594 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:22:00.594 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:00.594 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:22:00.594 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:00.594 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:00.594 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.594 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.594 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:02.502 00:22:02.502 real 0m15.642s 00:22:02.502 user 0m35.029s 00:22:02.502 sys 0m5.838s 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.502 ************************************ 00:22:02.502 END TEST nvmf_shutdown_tc1 00:22:02.502 ************************************ 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:02.502 ************************************ 00:22:02.502 START TEST nvmf_shutdown_tc2 00:22:02.502 ************************************ 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.502 18:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
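
The array setup above is nvmf/common.sh's NIC classification table: Intel (0x8086) devices 0x1592 and 0x159b are collected as e810, 0x37d2 as x722, and the 0x15b3 entries as Mellanox parts. The lines that follow narrow pci_devs to the e810 list and report this host's two matching ports and their net devices. A hypothetical way to reproduce that discovery by hand with lspci and sysfs (not how common.sh itself does it):

# List E810 functions (vendor 0x8086, device 0x159b, the ID matched
# below) and the kernel net devices bound to each of them.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    echo "Found net devices under $pci: $(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null | xargs)"
done
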
00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:02.502 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:02.502 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.502 18:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.502 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:02.503 Found net devices under 0000:86:00.0: cvl_0_0 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:02.503 Found net devices under 0000:86:00.1: cvl_0_1 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.503 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.763 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.763 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.763 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:02.763 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.763 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.763 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.763 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:02.763 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:02.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:02.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:22:02.763 00:22:02.763 --- 10.0.0.2 ping statistics --- 00:22:02.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.763 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:22:02.763 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:22:02.763 00:22:02.763 --- 10.0.0.1 ping statistics --- 00:22:02.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.763 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:22:02.763 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.763 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:22:02.763 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:02.763 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.763 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:02.763 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:02.763 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.763 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:02.763 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:03.022 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:03.022 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:03.022 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:03.022 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:03.022 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=482319 00:22:03.022 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 482319 00:22:03.022 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:03.022 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 482319 ']' 00:22:03.022 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.022 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.022 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.022 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.022 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:03.022 [2024-10-08 18:29:56.156329] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:22:03.022 [2024-10-08 18:29:56.156383] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.022 [2024-10-08 18:29:56.231195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:03.022 [2024-10-08 18:29:56.305197] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.022 [2024-10-08 18:29:56.305238] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.022 [2024-10-08 18:29:56.305244] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.022 [2024-10-08 18:29:56.305250] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.022 [2024-10-08 18:29:56.305255] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.022 [2024-10-08 18:29:56.306926] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.022 [2024-10-08 18:29:56.307036] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.022 [2024-10-08 18:29:56.307123] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.022 [2024-10-08 18:29:56.307124] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:03.959 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.959 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:03.959 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:03.959 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.959 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:03.959 [2024-10-08 18:29:57.022983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.959 18:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:03.959 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.959 
18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:03.959 Malloc1 00:22:03.959 [2024-10-08 18:29:57.114354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.959 Malloc2 00:22:03.959 Malloc3 00:22:03.959 Malloc4 00:22:03.959 Malloc5 00:22:04.217 Malloc6 00:22:04.217 Malloc7 00:22:04.217 Malloc8 00:22:04.217 Malloc9 00:22:04.217 Malloc10 00:22:04.217 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.217 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:04.217 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:04.217 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.477 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=482602 00:22:04.477 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 482602 /var/tmp/bdevperf.sock 00:22:04.477 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 482602 ']' 00:22:04.477 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:04.477 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:04.477 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:04.477 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:04.477 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:04.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
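Note: the "--json /dev/fd/63" argument in the bdevperf command traced above is the footprint of bash process substitution: gen_nvmf_target_json writes the generated controller config to an anonymous pipe, and bdevperf opens that pipe as if it were a regular JSON file, so no temporary config file is written to disk. A minimal sketch of the pattern, with a stub generator standing in for gen_nvmf_target_json (whose real output appears further below):

# <(...) expands to a /dev/fd/N path; bdevperf opens it like a file.
gen_config() {
    # stand-in for gen_nvmf_target_json; illustrative only
    echo '{"subsystems": []}'
}
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json <(gen_config) \
    -q 64 -o 65536 -w verify -t 10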
00:22:04.477 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:22:04.477 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:04.477 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:22:04.477 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.477 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:04.477 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:04.477 { 00:22:04.477 "params": { 00:22:04.477 "name": "Nvme$subsystem", 00:22:04.477 "trtype": "$TEST_TRANSPORT", 00:22:04.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.478 "adrfam": "ipv4", 00:22:04.478 "trsvcid": "$NVMF_PORT", 00:22:04.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.478 "hdgst": ${hdgst:-false}, 00:22:04.478 "ddgst": ${ddgst:-false} 00:22:04.478 }, 00:22:04.478 "method": "bdev_nvme_attach_controller" 00:22:04.478 } 00:22:04.478 EOF 00:22:04.478 )") 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:04.478 { 00:22:04.478 "params": { 00:22:04.478 "name": "Nvme$subsystem", 00:22:04.478 "trtype": "$TEST_TRANSPORT", 00:22:04.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.478 "adrfam": "ipv4", 00:22:04.478 "trsvcid": "$NVMF_PORT", 00:22:04.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.478 "hdgst": ${hdgst:-false}, 00:22:04.478 "ddgst": ${ddgst:-false} 00:22:04.478 }, 00:22:04.478 "method": "bdev_nvme_attach_controller" 00:22:04.478 } 00:22:04.478 EOF 00:22:04.478 )") 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:04.478 { 00:22:04.478 "params": { 00:22:04.478 "name": "Nvme$subsystem", 00:22:04.478 "trtype": "$TEST_TRANSPORT", 00:22:04.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.478 "adrfam": "ipv4", 00:22:04.478 "trsvcid": "$NVMF_PORT", 00:22:04.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.478 "hdgst": ${hdgst:-false}, 00:22:04.478 "ddgst": ${ddgst:-false} 00:22:04.478 }, 00:22:04.478 "method": "bdev_nvme_attach_controller" 00:22:04.478 } 00:22:04.478 EOF 00:22:04.478 )") 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- 
# config+=("$(cat <<-EOF 00:22:04.478 { 00:22:04.478 "params": { 00:22:04.478 "name": "Nvme$subsystem", 00:22:04.478 "trtype": "$TEST_TRANSPORT", 00:22:04.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.478 "adrfam": "ipv4", 00:22:04.478 "trsvcid": "$NVMF_PORT", 00:22:04.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.478 "hdgst": ${hdgst:-false}, 00:22:04.478 "ddgst": ${ddgst:-false} 00:22:04.478 }, 00:22:04.478 "method": "bdev_nvme_attach_controller" 00:22:04.478 } 00:22:04.478 EOF 00:22:04.478 )") 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:04.478 { 00:22:04.478 "params": { 00:22:04.478 "name": "Nvme$subsystem", 00:22:04.478 "trtype": "$TEST_TRANSPORT", 00:22:04.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.478 "adrfam": "ipv4", 00:22:04.478 "trsvcid": "$NVMF_PORT", 00:22:04.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.478 "hdgst": ${hdgst:-false}, 00:22:04.478 "ddgst": ${ddgst:-false} 00:22:04.478 }, 00:22:04.478 "method": "bdev_nvme_attach_controller" 00:22:04.478 } 00:22:04.478 EOF 00:22:04.478 )") 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:04.478 { 00:22:04.478 "params": { 00:22:04.478 "name": "Nvme$subsystem", 00:22:04.478 "trtype": "$TEST_TRANSPORT", 00:22:04.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.478 "adrfam": "ipv4", 00:22:04.478 "trsvcid": "$NVMF_PORT", 00:22:04.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.478 "hdgst": ${hdgst:-false}, 00:22:04.478 "ddgst": ${ddgst:-false} 00:22:04.478 }, 00:22:04.478 "method": "bdev_nvme_attach_controller" 00:22:04.478 } 00:22:04.478 EOF 00:22:04.478 )") 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:04.478 { 00:22:04.478 "params": { 00:22:04.478 "name": "Nvme$subsystem", 00:22:04.478 "trtype": "$TEST_TRANSPORT", 00:22:04.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.478 "adrfam": "ipv4", 00:22:04.478 "trsvcid": "$NVMF_PORT", 00:22:04.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.478 "hdgst": ${hdgst:-false}, 00:22:04.478 "ddgst": ${ddgst:-false} 00:22:04.478 }, 00:22:04.478 "method": "bdev_nvme_attach_controller" 00:22:04.478 } 00:22:04.478 EOF 00:22:04.478 )") 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:04.478 [2024-10-08 18:29:57.585484] Starting SPDK 
v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:22:04.478 [2024-10-08 18:29:57.585530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482602 ] 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:04.478 { 00:22:04.478 "params": { 00:22:04.478 "name": "Nvme$subsystem", 00:22:04.478 "trtype": "$TEST_TRANSPORT", 00:22:04.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.478 "adrfam": "ipv4", 00:22:04.478 "trsvcid": "$NVMF_PORT", 00:22:04.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.478 "hdgst": ${hdgst:-false}, 00:22:04.478 "ddgst": ${ddgst:-false} 00:22:04.478 }, 00:22:04.478 "method": "bdev_nvme_attach_controller" 00:22:04.478 } 00:22:04.478 EOF 00:22:04.478 )") 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:04.478 { 00:22:04.478 "params": { 00:22:04.478 "name": "Nvme$subsystem", 00:22:04.478 "trtype": "$TEST_TRANSPORT", 00:22:04.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.478 "adrfam": "ipv4", 00:22:04.478 "trsvcid": "$NVMF_PORT", 00:22:04.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.478 "hdgst": ${hdgst:-false}, 00:22:04.478 "ddgst": ${ddgst:-false} 00:22:04.478 }, 00:22:04.478 "method": "bdev_nvme_attach_controller" 00:22:04.478 } 00:22:04.478 EOF 00:22:04.478 )") 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:04.478 { 00:22:04.478 "params": { 00:22:04.478 "name": "Nvme$subsystem", 00:22:04.478 "trtype": "$TEST_TRANSPORT", 00:22:04.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.478 "adrfam": "ipv4", 00:22:04.478 "trsvcid": "$NVMF_PORT", 00:22:04.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.478 "hdgst": ${hdgst:-false}, 00:22:04.478 "ddgst": ${ddgst:-false} 00:22:04.478 }, 00:22:04.478 "method": "bdev_nvme_attach_controller" 00:22:04.478 } 00:22:04.478 EOF 00:22:04.478 )") 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 
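Note: each pass through the "for subsystem" loop above appends one heredoc stanza to the config array; the xtrace prints the template with its $subsystem, $TEST_TRANSPORT and $NVMF_FIRST_TARGET_IP placeholders, while the comma-joined, jq-validated document printed next carries the expanded values (tcp, 10.0.0.2, port 4420, cnode1 through cnode10). A trimmed sketch of the same accumulate/join/validate pattern, assuming the same environment variables; the real helper differs in details (for example the hdgst/ddgst defaults shown in the trace):

gen_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,   # makes ${config[*]} join the stanzas with commas
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
}

# usage sketch: TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420 gen_json 1 2 3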
00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:22:04.478 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:04.478 "params": { 00:22:04.478 "name": "Nvme1", 00:22:04.478 "trtype": "tcp", 00:22:04.478 "traddr": "10.0.0.2", 00:22:04.478 "adrfam": "ipv4", 00:22:04.478 "trsvcid": "4420", 00:22:04.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.478 "hdgst": false, 00:22:04.478 "ddgst": false 00:22:04.478 }, 00:22:04.478 "method": "bdev_nvme_attach_controller" 00:22:04.478 },{ 00:22:04.478 "params": { 00:22:04.478 "name": "Nvme2", 00:22:04.478 "trtype": "tcp", 00:22:04.478 "traddr": "10.0.0.2", 00:22:04.478 "adrfam": "ipv4", 00:22:04.478 "trsvcid": "4420", 00:22:04.478 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:04.478 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:04.478 "hdgst": false, 00:22:04.478 "ddgst": false 00:22:04.478 }, 00:22:04.478 "method": "bdev_nvme_attach_controller" 00:22:04.478 },{ 00:22:04.478 "params": { 00:22:04.479 "name": "Nvme3", 00:22:04.479 "trtype": "tcp", 00:22:04.479 "traddr": "10.0.0.2", 00:22:04.479 "adrfam": "ipv4", 00:22:04.479 "trsvcid": "4420", 00:22:04.479 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:04.479 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:04.479 "hdgst": false, 00:22:04.479 "ddgst": false 00:22:04.479 }, 00:22:04.479 "method": "bdev_nvme_attach_controller" 00:22:04.479 },{ 00:22:04.479 "params": { 00:22:04.479 "name": "Nvme4", 00:22:04.479 "trtype": "tcp", 00:22:04.479 "traddr": "10.0.0.2", 00:22:04.479 "adrfam": "ipv4", 00:22:04.479 "trsvcid": "4420", 00:22:04.479 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:04.479 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:04.479 "hdgst": false, 00:22:04.479 "ddgst": false 00:22:04.479 }, 00:22:04.479 "method": "bdev_nvme_attach_controller" 00:22:04.479 },{ 00:22:04.479 "params": { 00:22:04.479 "name": "Nvme5", 00:22:04.479 "trtype": "tcp", 00:22:04.479 "traddr": "10.0.0.2", 00:22:04.479 "adrfam": "ipv4", 00:22:04.479 "trsvcid": "4420", 00:22:04.479 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:04.479 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:04.479 "hdgst": false, 00:22:04.479 "ddgst": false 00:22:04.479 }, 00:22:04.479 "method": "bdev_nvme_attach_controller" 00:22:04.479 },{ 00:22:04.479 "params": { 00:22:04.479 "name": "Nvme6", 00:22:04.479 "trtype": "tcp", 00:22:04.479 "traddr": "10.0.0.2", 00:22:04.479 "adrfam": "ipv4", 00:22:04.479 "trsvcid": "4420", 00:22:04.479 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:04.479 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:04.479 "hdgst": false, 00:22:04.479 "ddgst": false 00:22:04.479 }, 00:22:04.479 "method": "bdev_nvme_attach_controller" 00:22:04.479 },{ 00:22:04.479 "params": { 00:22:04.479 "name": "Nvme7", 00:22:04.479 "trtype": "tcp", 00:22:04.479 "traddr": "10.0.0.2", 00:22:04.479 "adrfam": "ipv4", 00:22:04.479 "trsvcid": "4420", 00:22:04.479 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:04.479 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:04.479 "hdgst": false, 00:22:04.479 "ddgst": false 00:22:04.479 }, 00:22:04.479 "method": "bdev_nvme_attach_controller" 00:22:04.479 },{ 00:22:04.479 "params": { 00:22:04.479 "name": "Nvme8", 00:22:04.479 "trtype": "tcp", 00:22:04.479 "traddr": "10.0.0.2", 00:22:04.479 "adrfam": "ipv4", 00:22:04.479 "trsvcid": "4420", 00:22:04.479 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:04.479 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:04.479 "hdgst": false, 00:22:04.479 "ddgst": false 00:22:04.479 }, 00:22:04.479 "method": "bdev_nvme_attach_controller" 00:22:04.479 },{ 00:22:04.479 "params": { 00:22:04.479 "name": "Nvme9", 00:22:04.479 "trtype": "tcp", 00:22:04.479 "traddr": "10.0.0.2", 00:22:04.479 "adrfam": "ipv4", 00:22:04.479 "trsvcid": "4420", 00:22:04.479 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:04.479 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:04.479 "hdgst": false, 00:22:04.479 "ddgst": false 00:22:04.479 }, 00:22:04.479 "method": "bdev_nvme_attach_controller" 00:22:04.479 },{ 00:22:04.479 "params": { 00:22:04.479 "name": "Nvme10", 00:22:04.479 "trtype": "tcp", 00:22:04.479 "traddr": "10.0.0.2", 00:22:04.479 "adrfam": "ipv4", 00:22:04.479 "trsvcid": "4420", 00:22:04.479 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:04.479 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:04.479 "hdgst": false, 00:22:04.479 "ddgst": false 00:22:04.479 }, 00:22:04.479 "method": "bdev_nvme_attach_controller" 00:22:04.479 }' 00:22:04.479 [2024-10-08 18:29:57.655093] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.479 [2024-10-08 18:29:57.726496] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.856 Running I/O for 10 seconds... 00:22:05.856 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.856 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:05.856 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:05.856 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.856 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:06.116 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:06.375 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:06.376 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:06.376 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:06.376 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:06.376 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.376 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:06.376 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.376 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:06.376 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:06.376 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:06.634 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:06.634 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- 
# killprocess 482602
00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 482602 ']'
00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 482602
00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname
00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:06.635 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 482602
00:22:06.893 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:06.893 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:22:06.893 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 482602'
00:22:06.893 killing process with pid 482602
00:22:06.893 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 482602
00:22:06.893 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 482602
00:22:06.893 Received shutdown signal, test time was about 0.915344 seconds
00:22:06.893
00:22:06.893 Latency(us)
00:22:06.893 [2024-10-08T16:30:00.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:06.893 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.893 Verification LBA range: start 0x0 length 0x400
00:22:06.893 Nvme1n1 : 0.90 285.09 17.82 0.00 0.00 222193.86 15166.90 214708.42
00:22:06.893 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.893 Verification LBA range: start 0x0 length 0x400
00:22:06.893 Nvme2n1 : 0.89 286.47 17.90 0.00 0.00 217175.28 17975.59 210713.84
00:22:06.893 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.893 Verification LBA range: start 0x0 length 0x400
00:22:06.893 Nvme3n1 : 0.88 301.00 18.81 0.00 0.00 201291.80 7552.24 209715.20
00:22:06.893 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.893 Verification LBA range: start 0x0 length 0x400
00:22:06.893 Nvme4n1 : 0.89 288.74 18.05 0.00 0.00 207450.58 13544.11 216705.71
00:22:06.893 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.893 Verification LBA range: start 0x0 length 0x400
00:22:06.893 Nvme5n1 : 0.90 284.21 17.76 0.00 0.00 207263.94 16727.28 210713.84
00:22:06.893 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.893 Verification LBA range: start 0x0 length 0x400
00:22:06.893 Nvme6n1 : 0.91 280.47 17.53 0.00 0.00 206534.46 30208.98 198730.12
00:22:06.893 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.893 Verification LBA range: start 0x0 length 0x400
00:22:06.893 Nvme7n1 : 0.91 281.78 17.61 0.00 0.00 201668.51 12982.37 219701.64
00:22:06.893 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.893 Verification LBA range: start 0x0 length 0x400
00:22:06.893 Nvme8n1 : 0.90 282.97 17.69 0.00 0.00 196972.25 15541.39 203723.34
00:22:06.893 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.893 Verification LBA range: start 0x0 length 0x400
00:22:06.893 Nvme9n1 : 0.91 279.88 17.49 0.00 0.00 195563.28 16727.28 214708.42
00:22:06.893 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.893 Verification LBA range: start 0x0 length 0x400
00:22:06.893 Nvme10n1 : 0.88 218.04 13.63 0.00 0.00 244359.88 18474.91 234681.30
00:22:06.893 [2024-10-08T16:30:00.216Z] ===================================================================================================================
00:22:06.893 [2024-10-08T16:30:00.216Z] Total : 2788.64 174.29 0.00 0.00 209142.41 7552.24 234681.30
00:22:07.152 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 482319
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:08.089 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 482319 ']'
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 482319
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 482319 ']'
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 482319
00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname
00:22:08.089
18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 482319 00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 482319' 00:22:08.089 killing process with pid 482319 00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 482319 00:22:08.089 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 482319 00:22:08.656 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:08.656 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:08.656 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:08.656 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:08.656 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:22:08.656 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:08.656 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:22:08.656 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:08.656 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:08.656 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.656 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.656 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.562 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:10.562 00:22:10.562 real 0m8.090s 00:22:10.562 user 0m24.541s 00:22:10.562 sys 0m1.415s 00:22:10.562 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:10.562 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.562 ************************************ 00:22:10.562 END TEST nvmf_shutdown_tc2 00:22:10.562 ************************************ 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:10.822 18:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:10.822 ************************************ 00:22:10.822 START TEST nvmf_shutdown_tc3 00:22:10.822 ************************************ 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:10.822 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:10.823 18:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 
- 0x159b)' 00:22:10.823 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:10.823 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:10.823 Found net devices under 0000:86:00.0: cvl_0_0 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:10.823 
18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:10.823 Found net devices under 0000:86:00.1: cvl_0_1 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- 
# ip -4 addr flush cvl_0_1 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.823 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.823 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.823 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.823 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:10.823 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:11.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:22:11.083 00:22:11.083 --- 10.0.0.2 ping statistics --- 00:22:11.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.083 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:11.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:22:11.083 00:22:11.083 --- 10.0.0.1 ping statistics --- 00:22:11.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.083 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=483910 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 483910 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 483910 ']' 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
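Note: the nvmf_tgt command traced just above carries three stacked "ip netns exec cvl_0_0_ns_spdk" prefixes where the tc2 run carried two. nvmf/common.sh@293 (traced in both runs) prepends NVMF_TARGET_NS_CMD onto NVMF_APP, and because those arrays persist in the shell across test cases, every nvmftestinit adds one more wrapper; re-entering the namespace a process is already in is a no-op, so the repetition is harmless. A sketch of the accumulation, assuming the same variable names:

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E)
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
for testcase in 1 2 3; do
    # nvmf/common.sh@293 runs once per test case on the same arrays
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    echo "${NVMF_APP[@]}"   # grows by one "ip netns exec" prefix each time
done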
00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:11.083 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:11.083 [2024-10-08 18:30:04.336705] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:22:11.083 [2024-10-08 18:30:04.336751] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.342 [2024-10-08 18:30:04.405996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.342 [2024-10-08 18:30:04.483855] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.342 [2024-10-08 18:30:04.483892] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.342 [2024-10-08 18:30:04.483899] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.342 [2024-10-08 18:30:04.483906] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.342 [2024-10-08 18:30:04.483911] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.342 [2024-10-08 18:30:04.485450] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.342 [2024-10-08 18:30:04.485557] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.342 [2024-10-08 18:30:04.485665] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.342 [2024-10-08 18:30:04.485666] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:11.910 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:11.910 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:11.910 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:11.910 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:11.910 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:11.910 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.910 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:11.910 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.910 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:11.910 [2024-10-08 18:30:05.223856] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.910 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.910 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:11.910 18:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:11.910 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:11.910 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.170 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:12.170 Malloc1 
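shutdown.sh@27-36 above uses a common SPDK test idiom: accumulate one block of RPCs per subsystem in a rpcs.txt batch file (the ten cat calls), then replay the whole file through a single rpc_cmd invocation, which is what produces the Malloc1..Malloc10 bdevs that appear next. The heredoc bodies themselves are not echoed in the trace, so the block below is only a plausible reconstruction under the assumption that each subsystem gets a malloc bdev, a namespace and a TCP listener; the bdev size, block size and serial numbers are illustrative.

RPCS=test/nvmf/target/rpcs.txt
rm -f "$RPCS"
for i in {1..10}; do
    {
        # 64 MiB malloc bdev with 512-byte blocks backing subsystem $i
        echo "bdev_malloc_create -b Malloc$i 64 512"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    } >> "$RPCS"
done
# rpc.py executes newline-separated calls piped on stdin, so the whole
# batch costs one process spawn instead of forty.
scripts/rpc.py < "$RPCS"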
00:22:12.170 [2024-10-08 18:30:05.323429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.170 Malloc2 00:22:12.170 Malloc3 00:22:12.170 Malloc4 00:22:12.170 Malloc5 00:22:12.429 Malloc6 00:22:12.429 Malloc7 00:22:12.429 Malloc8 00:22:12.429 Malloc9 00:22:12.429 Malloc10 00:22:12.429 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.429 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:12.429 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:12.429 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=484220 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 484220 /var/tmp/bdevperf.sock 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 484220 ']' 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
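The long config+=("$(cat <<-EOF ...)") stretch that follows is gen_nvmf_target_json (nvmf/common.sh@558-584) assembling bdevperf's --json input: one bdev_nvme_attach_controller stanza per subsystem, comma-joined and validated with jq. Condensed, the idiom looks roughly like this; the wrapper around the join is paraphrased, while the per-stanza fields match the resolved output printed further down.

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the stanzas with commas via a subshell-scoped IFS, wrap them in
    # the bdev-subsystem skeleton, and let jq validate the final document.
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
                    "config": [ $(IFS=,; printf '%s\n' "${config[*]}") ] } ] }
JSON
}

# bdevperf then consumes the result via process substitution, which is why
# the trace shows --json /dev/fd/63:
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json {1..10}) \
#       -q 64 -o 65536 -w verify -t 10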
00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.689 { 00:22:12.689 "params": { 00:22:12.689 "name": "Nvme$subsystem", 00:22:12.689 "trtype": "$TEST_TRANSPORT", 00:22:12.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.689 "adrfam": "ipv4", 00:22:12.689 "trsvcid": "$NVMF_PORT", 00:22:12.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.689 "hdgst": ${hdgst:-false}, 00:22:12.689 "ddgst": ${ddgst:-false} 00:22:12.689 }, 00:22:12.689 "method": "bdev_nvme_attach_controller" 00:22:12.689 } 00:22:12.689 EOF 00:22:12.689 )") 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.689 { 00:22:12.689 "params": { 00:22:12.689 "name": "Nvme$subsystem", 00:22:12.689 "trtype": "$TEST_TRANSPORT", 00:22:12.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.689 "adrfam": "ipv4", 00:22:12.689 "trsvcid": "$NVMF_PORT", 00:22:12.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.689 "hdgst": ${hdgst:-false}, 00:22:12.689 "ddgst": ${ddgst:-false} 00:22:12.689 }, 00:22:12.689 "method": "bdev_nvme_attach_controller" 00:22:12.689 } 00:22:12.689 EOF 00:22:12.689 )") 00:22:12.689 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.690 { 00:22:12.690 "params": { 00:22:12.690 "name": "Nvme$subsystem", 00:22:12.690 "trtype": "$TEST_TRANSPORT", 00:22:12.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.690 "adrfam": "ipv4", 00:22:12.690 "trsvcid": "$NVMF_PORT", 00:22:12.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.690 "hdgst": ${hdgst:-false}, 00:22:12.690 "ddgst": ${ddgst:-false} 00:22:12.690 }, 00:22:12.690 "method": "bdev_nvme_attach_controller" 00:22:12.690 } 00:22:12.690 EOF 00:22:12.690 )") 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- 
# config+=("$(cat <<-EOF 00:22:12.690 { 00:22:12.690 "params": { 00:22:12.690 "name": "Nvme$subsystem", 00:22:12.690 "trtype": "$TEST_TRANSPORT", 00:22:12.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.690 "adrfam": "ipv4", 00:22:12.690 "trsvcid": "$NVMF_PORT", 00:22:12.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.690 "hdgst": ${hdgst:-false}, 00:22:12.690 "ddgst": ${ddgst:-false} 00:22:12.690 }, 00:22:12.690 "method": "bdev_nvme_attach_controller" 00:22:12.690 } 00:22:12.690 EOF 00:22:12.690 )") 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.690 { 00:22:12.690 "params": { 00:22:12.690 "name": "Nvme$subsystem", 00:22:12.690 "trtype": "$TEST_TRANSPORT", 00:22:12.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.690 "adrfam": "ipv4", 00:22:12.690 "trsvcid": "$NVMF_PORT", 00:22:12.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.690 "hdgst": ${hdgst:-false}, 00:22:12.690 "ddgst": ${ddgst:-false} 00:22:12.690 }, 00:22:12.690 "method": "bdev_nvme_attach_controller" 00:22:12.690 } 00:22:12.690 EOF 00:22:12.690 )") 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.690 { 00:22:12.690 "params": { 00:22:12.690 "name": "Nvme$subsystem", 00:22:12.690 "trtype": "$TEST_TRANSPORT", 00:22:12.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.690 "adrfam": "ipv4", 00:22:12.690 "trsvcid": "$NVMF_PORT", 00:22:12.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.690 "hdgst": ${hdgst:-false}, 00:22:12.690 "ddgst": ${ddgst:-false} 00:22:12.690 }, 00:22:12.690 "method": "bdev_nvme_attach_controller" 00:22:12.690 } 00:22:12.690 EOF 00:22:12.690 )") 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.690 { 00:22:12.690 "params": { 00:22:12.690 "name": "Nvme$subsystem", 00:22:12.690 "trtype": "$TEST_TRANSPORT", 00:22:12.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.690 "adrfam": "ipv4", 00:22:12.690 "trsvcid": "$NVMF_PORT", 00:22:12.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.690 "hdgst": ${hdgst:-false}, 00:22:12.690 "ddgst": ${ddgst:-false} 00:22:12.690 }, 00:22:12.690 "method": "bdev_nvme_attach_controller" 00:22:12.690 } 00:22:12.690 EOF 00:22:12.690 )") 00:22:12.690 [2024-10-08 18:30:05.802065] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:22:12.690 [2024-10-08 18:30:05.802117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484220 ] 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.690 { 00:22:12.690 "params": { 00:22:12.690 "name": "Nvme$subsystem", 00:22:12.690 "trtype": "$TEST_TRANSPORT", 00:22:12.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.690 "adrfam": "ipv4", 00:22:12.690 "trsvcid": "$NVMF_PORT", 00:22:12.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.690 "hdgst": ${hdgst:-false}, 00:22:12.690 "ddgst": ${ddgst:-false} 00:22:12.690 }, 00:22:12.690 "method": "bdev_nvme_attach_controller" 00:22:12.690 } 00:22:12.690 EOF 00:22:12.690 )") 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.690 { 00:22:12.690 "params": { 00:22:12.690 "name": "Nvme$subsystem", 00:22:12.690 "trtype": "$TEST_TRANSPORT", 00:22:12.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.690 "adrfam": "ipv4", 00:22:12.690 "trsvcid": "$NVMF_PORT", 00:22:12.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.690 "hdgst": ${hdgst:-false}, 00:22:12.690 "ddgst": ${ddgst:-false} 00:22:12.690 }, 00:22:12.690 "method": "bdev_nvme_attach_controller" 00:22:12.690 } 00:22:12.690 EOF 00:22:12.690 )") 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.690 { 00:22:12.690 "params": { 00:22:12.690 "name": "Nvme$subsystem", 00:22:12.690 "trtype": "$TEST_TRANSPORT", 00:22:12.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.690 "adrfam": "ipv4", 00:22:12.690 "trsvcid": "$NVMF_PORT", 00:22:12.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.690 "hdgst": ${hdgst:-false}, 00:22:12.690 "ddgst": ${ddgst:-false} 00:22:12.690 }, 00:22:12.690 "method": "bdev_nvme_attach_controller" 00:22:12.690 } 00:22:12.690 EOF 00:22:12.690 )") 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 
00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:22:12.690 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:12.690 "params": { 00:22:12.690 "name": "Nvme1", 00:22:12.690 "trtype": "tcp", 00:22:12.690 "traddr": "10.0.0.2", 00:22:12.690 "adrfam": "ipv4", 00:22:12.690 "trsvcid": "4420", 00:22:12.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.690 "hdgst": false, 00:22:12.690 "ddgst": false 00:22:12.690 }, 00:22:12.690 "method": "bdev_nvme_attach_controller" 00:22:12.690 },{ 00:22:12.690 "params": { 00:22:12.690 "name": "Nvme2", 00:22:12.690 "trtype": "tcp", 00:22:12.690 "traddr": "10.0.0.2", 00:22:12.690 "adrfam": "ipv4", 00:22:12.690 "trsvcid": "4420", 00:22:12.690 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:12.690 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:12.690 "hdgst": false, 00:22:12.690 "ddgst": false 00:22:12.690 }, 00:22:12.690 "method": "bdev_nvme_attach_controller" 00:22:12.690 },{ 00:22:12.690 "params": { 00:22:12.690 "name": "Nvme3", 00:22:12.690 "trtype": "tcp", 00:22:12.690 "traddr": "10.0.0.2", 00:22:12.690 "adrfam": "ipv4", 00:22:12.690 "trsvcid": "4420", 00:22:12.690 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:12.690 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:12.690 "hdgst": false, 00:22:12.690 "ddgst": false 00:22:12.690 }, 00:22:12.690 "method": "bdev_nvme_attach_controller" 00:22:12.690 },{ 00:22:12.690 "params": { 00:22:12.690 "name": "Nvme4", 00:22:12.690 "trtype": "tcp", 00:22:12.690 "traddr": "10.0.0.2", 00:22:12.690 "adrfam": "ipv4", 00:22:12.690 "trsvcid": "4420", 00:22:12.690 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:12.690 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:12.690 "hdgst": false, 00:22:12.690 "ddgst": false 00:22:12.690 }, 00:22:12.690 "method": "bdev_nvme_attach_controller" 00:22:12.690 },{ 00:22:12.690 "params": { 00:22:12.690 "name": "Nvme5", 00:22:12.690 "trtype": "tcp", 00:22:12.690 "traddr": "10.0.0.2", 00:22:12.690 "adrfam": "ipv4", 00:22:12.690 "trsvcid": "4420", 00:22:12.690 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:12.690 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:12.690 "hdgst": false, 00:22:12.691 "ddgst": false 00:22:12.691 }, 00:22:12.691 "method": "bdev_nvme_attach_controller" 00:22:12.691 },{ 00:22:12.691 "params": { 00:22:12.691 "name": "Nvme6", 00:22:12.691 "trtype": "tcp", 00:22:12.691 "traddr": "10.0.0.2", 00:22:12.691 "adrfam": "ipv4", 00:22:12.691 "trsvcid": "4420", 00:22:12.691 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:12.691 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:12.691 "hdgst": false, 00:22:12.691 "ddgst": false 00:22:12.691 }, 00:22:12.691 "method": "bdev_nvme_attach_controller" 00:22:12.691 },{ 00:22:12.691 "params": { 00:22:12.691 "name": "Nvme7", 00:22:12.691 "trtype": "tcp", 00:22:12.691 "traddr": "10.0.0.2", 00:22:12.691 "adrfam": "ipv4", 00:22:12.691 "trsvcid": "4420", 00:22:12.691 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:12.691 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:12.691 "hdgst": false, 00:22:12.691 "ddgst": false 00:22:12.691 }, 00:22:12.691 "method": "bdev_nvme_attach_controller" 00:22:12.691 },{ 00:22:12.691 "params": { 00:22:12.691 "name": "Nvme8", 00:22:12.691 "trtype": "tcp", 00:22:12.691 "traddr": "10.0.0.2", 00:22:12.691 "adrfam": "ipv4", 00:22:12.691 "trsvcid": "4420", 00:22:12.691 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:12.691 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:12.691 "hdgst": false, 00:22:12.691 "ddgst": false 00:22:12.691 }, 00:22:12.691 "method": "bdev_nvme_attach_controller" 00:22:12.691 },{ 00:22:12.691 "params": { 00:22:12.691 "name": "Nvme9", 00:22:12.691 "trtype": "tcp", 00:22:12.691 "traddr": "10.0.0.2", 00:22:12.691 "adrfam": "ipv4", 00:22:12.691 "trsvcid": "4420", 00:22:12.691 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:12.691 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:12.691 "hdgst": false, 00:22:12.691 "ddgst": false 00:22:12.691 }, 00:22:12.691 "method": "bdev_nvme_attach_controller" 00:22:12.691 },{ 00:22:12.691 "params": { 00:22:12.691 "name": "Nvme10", 00:22:12.691 "trtype": "tcp", 00:22:12.691 "traddr": "10.0.0.2", 00:22:12.691 "adrfam": "ipv4", 00:22:12.691 "trsvcid": "4420", 00:22:12.691 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:12.691 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:12.691 "hdgst": false, 00:22:12.691 "ddgst": false 00:22:12.691 }, 00:22:12.691 "method": "bdev_nvme_attach_controller" 00:22:12.691 }' 00:22:12.691 [2024-10-08 18:30:05.873690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.691 [2024-10-08 18:30:05.945320] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.596 Running I/O for 10 seconds... 00:22:15.167 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:15.167 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:15.167 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:15.167 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:15.168 18:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 483910 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 483910 ']' 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 483910 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:15.168 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 483910 00:22:15.441 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:15.441 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:15.441 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 483910' 00:22:15.441 killing process with pid 483910 00:22:15.441 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 483910 00:22:15.441 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 483910 00:22:15.441 [2024-10-08 18:30:08.499032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.441 [2024-10-08 18:30:08.499083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.441 [2024-10-08 18:30:08.499091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.441 [2024-10-08 18:30:08.499098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.441 [2024-10-08 18:30:08.499105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.441 [2024-10-08 18:30:08.499111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.441 
[2024-10-08 18:30:08.499117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.441 [2024-10-08 18:30:08.499124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.441 [2024-10-08 18:30:08.499130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.441 [2024-10-08 18:30:08.499138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the 
state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.499504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97030 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500641] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 
00:22:15.442 [2024-10-08 18:30:08.500781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.442 [2024-10-08 18:30:08.500812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is 
same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.500982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a9a0 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502221] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.443 [2024-10-08 18:30:08.502308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.444 [2024-10-08 18:30:08.502314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.444 [2024-10-08 18:30:08.502322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.444 [2024-10-08 18:30:08.502329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.444 [2024-10-08 18:30:08.502336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.444 [2024-10-08 18:30:08.502343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.444 [2024-10-08 18:30:08.502349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 00:22:15.444 [2024-10-08 18:30:08.502355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97520 is same with the state(6) to be set 
00:22:15.444 [2024-10-08 18:30:08.503583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa979f0 is same with the state(6) to be set
[last message repeated through 2024-10-08 18:30:08.503969]
00:22:15.444 [2024-10-08 18:30:08.504695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97ee0 is same with the state(6) to be set
[last message repeated through 2024-10-08 18:30:08.505103]
00:22:15.445 [2024-10-08 18:30:08.505655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa983b0 is same with the state(6) to be set
[last message repeated through 2024-10-08 18:30:08.506051]
00:22:15.446 [2024-10-08 18:30:08.506843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98880 is same with the state(6) to be set
[last message repeated through 2024-10-08 18:30:08.507244]
00:22:15.447 [2024-10-08 18:30:08.508663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:15.447 [2024-10-08 18:30:08.508694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.447 [2024-10-08 18:30:08.508704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:15.447 [2024-10-08 18:30:08.508711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.447 [2024-10-08 18:30:08.508719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:15.447 [2024-10-08 18:30:08.508726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.447 [2024-10-08 18:30:08.508733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:15.447 [2024-10-08 18:30:08.508740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.447 [2024-10-08 18:30:08.508746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca5770 is same with the state(6) to be set
00:22:15.447 [2024-10-08 18:30:08.508785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:15.447 [2024-10-08 18:30:08.508796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.447 [2024-10-08 18:30:08.508803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:15.447 [2024-10-08 18:30:08.508811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.447 [2024-10-08 18:30:08.508819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:15.447 [2024-10-08 18:30:08.508827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.447 [2024-10-08 18:30:08.508835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:15.447 [2024-10-08 18:30:08.508842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.447 [2024-10-08 18:30:08.508850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x84d6d0 is same with the state(6) to be
set 00:22:15.447 [2024-10-08 18:30:08.508874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.508884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.508894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.508903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.508911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.508918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.508929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.508937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.508944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x854650 is same with the state(6) to be set 00:22:15.447 [2024-10-08 18:30:08.508968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.508977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.508985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.508993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.509001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.509008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.509016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.509025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.509032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc793c0 is same with the state(6) to be set 00:22:15.447 [2024-10-08 18:30:08.509056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.509066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.509075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.509083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.509091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.509098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.509107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.509115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.509122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x858270 is same with the state(6) to be set 00:22:15.447 [2024-10-08 18:30:08.509144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.509153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.509162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.509169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.509178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.509186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.509195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.509204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.509210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x857e10 is same with the state(6) to be set 00:22:15.447 [2024-10-08 18:30:08.509238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.509247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.509256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.509264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.509270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:15.447 [2024-10-08 18:30:08.509278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.509287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.447 [2024-10-08 18:30:08.509294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.447 [2024-10-08 18:30:08.509301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d530 is same with the state(6) to be set 00:22:15.448 [2024-10-08 18:30:08.509395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98d50 is same with the state(6) to be set 00:22:15.448 [2024-10-08 18:30:08.509415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98d50 is same with the state(6) to be set 00:22:15.448 [2024-10-08 18:30:08.509423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98d50 is same with the state(6) to be set 00:22:15.448 [2024-10-08 18:30:08.509429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98d50 is same with the state(6) to be set 00:22:15.448 [2024-10-08 18:30:08.509436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98d50 is same with the state(6) to be set 00:22:15.448 [2024-10-08 18:30:08.509948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.509971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.448 [2024-10-08 18:30:08.509985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.509993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.448 [2024-10-08 18:30:08.510002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.510009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.448 [2024-10-08 18:30:08.510017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.510028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.448 [2024-10-08 18:30:08.510036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.510043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.448 [2024-10-08 18:30:08.510051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.510058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.448 [2024-10-08 18:30:08.510066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.510072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.448 [2024-10-08 18:30:08.510080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.510087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.448 [2024-10-08 18:30:08.510095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.510102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.448 [2024-10-08 18:30:08.510110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.510117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.448 [2024-10-08 18:30:08.510125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.510132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.448 [2024-10-08 18:30:08.510140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.510146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.448 [2024-10-08 18:30:08.510154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.510161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.448 [2024-10-08 18:30:08.510169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.510176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.448 [2024-10-08 18:30:08.510184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.510190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.448 [2024-10-08 18:30:08.510198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.448 [2024-10-08 18:30:08.510205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.448 [2024-10-08 18:30:08.510215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.448 [2024-10-08 18:30:08.510222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.448 [2024-10-08 18:30:08.510230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.448 [2024-10-08 18:30:08.510236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.448 [2024-10-08 18:30:08.510245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.448 [2024-10-08 18:30:08.510251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.448 [2024-10-08 18:30:08.510262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.448 [2024-10-08 18:30:08.510269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.448 [2024-10-08 18:30:08.510277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.448 [2024-10-08 18:30:08.510284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.448 [2024-10-08 18:30:08.510292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.448 [2024-10-08 18:30:08.510299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.448 [2024-10-08 18:30:08.510299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.448 [2024-10-08 18:30:08.510318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.448 [2024-10-08 18:30:08.510321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.448 [2024-10-08 18:30:08.510329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.448 [2024-10-08 18:30:08.510338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.448 [2024-10-08 18:30:08.510357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.448 [2024-10-08 18:30:08.510364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.448 [2024-10-08 18:30:08.510371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.448 [2024-10-08 18:30:08.510394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.448 [2024-10-08 18:30:08.510397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.448 [2024-10-08 18:30:08.510405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.448 [2024-10-08 18:30:08.510412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.448 [2024-10-08 18:30:08.510420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.448 [2024-10-08 18:30:08.510435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.448 [2024-10-08 18:30:08.510443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.448 [2024-10-08 18:30:08.510449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.448 [2024-10-08 18:30:08.510454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.448 [2024-10-08 18:30:08.510456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99240 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510579]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.449 [2024-10-08 18:30:08.510588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.449 [2024-10-08 18:30:08.510594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.449 [2024-10-08 18:30:08.510602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.449 [2024-10-08 18:30:08.510608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.449 [2024-10-08 18:30:08.510617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.449 [2024-10-08 18:30:08.510624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.449 [2024-10-08 18:30:08.510632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.449 [2024-10-08 18:30:08.510638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.449 [2024-10-08 18:30:08.510646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.449 [2024-10-08 18:30:08.510653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.449 [2024-10-08 18:30:08.510661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.449 [2024-10-08 18:30:08.510667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.449 [2024-10-08 18:30:08.510675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.449 [2024-10-08 18:30:08.510682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.449 [2024-10-08 18:30:08.510690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.449 [2024-10-08 18:30:08.510698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.449 [2024-10-08 18:30:08.510707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.449 [2024-10-08 18:30:08.510713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.449 [2024-10-08 18:30:08.510721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.449 [2024-10-08 18:30:08.510727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.449 [2024-10-08 18:30:08.510951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.449 [2024-10-08 18:30:08.510960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.449 [2024-10-08 18:30:08.510960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.510970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.510971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.450 [2024-10-08 18:30:08.510976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.510978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.450 [2024-10-08 18:30:08.510983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.510988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.450 [2024-10-08 18:30:08.510990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.510995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.450 [2024-10-08 18:30:08.510997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.450 [2024-10-08 18:30:08.511012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.450 [2024-10-08 18:30:08.511021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:15.450 [2024-10-08 18:30:08.511041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511095] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc5f080 was disconnected and freed. reset controller.
00:22:15.450 [2024-10-08 18:30:08.511097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.511210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set
00:22:15.450 [2024-10-08 18:30:08.512522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.450 [2024-10-08 18:30:08.512542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.450 [2024-10-08 18:30:08.512555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.450 [2024-10-08 18:30:08.512563]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.450 [2024-10-08 18:30:08.512572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.450 [2024-10-08 18:30:08.512579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.450 [2024-10-08 18:30:08.512591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.450 [2024-10-08 18:30:08.512598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.450 [2024-10-08 18:30:08.512607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.450 [2024-10-08 18:30:08.512614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.450 [2024-10-08 18:30:08.512622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.450 [2024-10-08 18:30:08.512629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.450 [2024-10-08 18:30:08.512638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.450 [2024-10-08 18:30:08.512645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.450 [2024-10-08 18:30:08.512653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.450 [2024-10-08 18:30:08.512660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.450 [2024-10-08 18:30:08.512668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.450 [2024-10-08 18:30:08.512675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.450 [2024-10-08 18:30:08.512683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.450 [2024-10-08 18:30:08.512690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.450 [2024-10-08 18:30:08.512698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.450 [2024-10-08 18:30:08.512705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.450 [2024-10-08 18:30:08.512714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.450 [2024-10-08 18:30:08.512720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.450 [2024-10-08 18:30:08.512728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.450 [2024-10-08 18:30:08.512735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.450 [2024-10-08 18:30:08.512743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.450 [2024-10-08 18:30:08.512750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.450 [2024-10-08 18:30:08.512758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.512774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.512791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.512812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.512827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.512842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.512857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.512871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.512886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.512901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.512916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.512931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.512946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.512961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.512977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.512992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.512999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.513007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.513014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.513022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.513032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.513041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.513048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.513056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.513062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.513071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.513077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.513086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.513093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.513101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.513107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.513117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.513125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.513134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.513140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.513150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.513156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.513165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.513173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.513181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.513188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.513196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.513202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.522403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set 00:22:15.451 [2024-10-08 18:30:08.522413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set 00:22:15.451 [2024-10-08 18:30:08.522421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99710 is same with the state(6) to be set 00:22:15.451 [2024-10-08 18:30:08.526084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.526098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.526110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.526119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.526130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.526139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.526150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.526159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.526170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.526180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.526191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.526200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.526211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.526220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.526231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 
lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.526240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.526251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.451 [2024-10-08 18:30:08.526262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.451 [2024-10-08 18:30:08.526274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.452 [2024-10-08 18:30:08.526282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.452 [2024-10-08 18:30:08.526303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.452 [2024-10-08 18:30:08.526324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.452 [2024-10-08 18:30:08.526344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.452 [2024-10-08 18:30:08.526365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.452 [2024-10-08 18:30:08.526390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.452 [2024-10-08 18:30:08.526410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.452 [2024-10-08 18:30:08.526430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.452 [2024-10-08 18:30:08.526450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.452 [2024-10-08 18:30:08.526471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.452 [2024-10-08 18:30:08.526491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.452 [2024-10-08 18:30:08.526512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526598] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ba3470 was disconnected and freed. reset controller. 00:22:15.452 [2024-10-08 18:30:08.526727] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:15.452 [2024-10-08 18:30:08.526785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76d610 (9): Bad file descriptor 00:22:15.452 [2024-10-08 18:30:08.526849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.452 [2024-10-08 18:30:08.526862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.452 [2024-10-08 18:30:08.526882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.452 [2024-10-08 18:30:08.526901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.452 [2024-10-08 18:30:08.526920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.526928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7c910 is same with the state(6) to be set 00:22:15.452 [2024-10-08 18:30:08.526943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca5770 (9): Bad file descriptor 00:22:15.452 [2024-10-08 18:30:08.526965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x84d6d0 (9): Bad file descriptor 00:22:15.452 [2024-10-08 18:30:08.526989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x854650 (9): Bad file descriptor 00:22:15.452 [2024-10-08 18:30:08.527008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc793c0 (9): Bad file descriptor 00:22:15.452 [2024-10-08 18:30:08.527028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x858270 (9): Bad file descriptor 00:22:15.452 [2024-10-08 18:30:08.527045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x857e10 (9): Bad file descriptor 00:22:15.452 [2024-10-08 18:30:08.527076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.452 [2024-10-08 18:30:08.527087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.527097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.452 [2024-10-08 18:30:08.527106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.527116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.452 [2024-10-08 18:30:08.527125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.527134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.452 [2024-10-08 18:30:08.527143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.452 [2024-10-08 18:30:08.527152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6350 is same with the state(6) to be set 00:22:15.452 [2024-10-08 18:30:08.527175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7d530 (9): Bad file descriptor 00:22:15.452 [2024-10-08 18:30:08.528857] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:15.452 [2024-10-08 18:30:08.528889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7c910 (9): Bad file descriptor 00:22:15.452 [2024-10-08 18:30:08.529583] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:15.452 [2024-10-08 18:30:08.529731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.452 [2024-10-08 18:30:08.529751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x76d610 with addr=10.0.0.2, port=4420 00:22:15.452 [2024-10-08 18:30:08.529763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76d610 is same with the state(6) to be set 00:22:15.452 [2024-10-08 18:30:08.529823] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:15.452 [2024-10-08 18:30:08.529888] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:15.452 [2024-10-08 18:30:08.529926] 
00:22:15.452 [2024-10-08 18:30:08.529926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.452 [2024-10-08 18:30:08.529939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ/ABORTED - SQ DELETION pair repeats for cid:11-62, lba:17792-24320 ...]
00:22:15.454 [2024-10-08 18:30:08.531020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.454 [2024-10-08 18:30:08.531029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.454 [2024-10-08 18:30:08.531042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.454 [2024-10-08 18:30:08.531051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE/ABORTED - SQ DELETION pair repeats for cid:1-8, lba:24704-25600 ...]
00:22:15.454 [2024-10-08 18:30:08.531226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.454 [2024-10-08 18:30:08.531234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.454 [2024-10-08 18:30:08.531244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4f190 is same with the state(6) to be set
00:22:15.454 [2024-10-08 18:30:08.531302] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc4f190 was disconnected and freed. reset controller.
00:22:15.454 [2024-10-08 18:30:08.531359] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:15.454 [2024-10-08 18:30:08.531420] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:15.454 [2024-10-08 18:30:08.531951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:15.454 [2024-10-08 18:30:08.531971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc7c910 with addr=10.0.0.2, port=4420
00:22:15.454 [2024-10-08 18:30:08.531982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7c910 is same with the state(6) to be set
00:22:15.454 [2024-10-08 18:30:08.531995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76d610 (9): Bad file descriptor
00:22:15.454 [2024-10-08 18:30:08.532050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.454 [2024-10-08 18:30:08.532064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/ABORTED - SQ DELETION pair repeats for READ cid:6-12 (lba:25344-26112), WRITE cid:0-4 (lba:32768-33280), and READ cid:13-62 (lba:26240-32512) ...]
00:22:15.456 [2024-10-08 18:30:08.533347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.456 [2024-10-08 18:30:08.533356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.456 [2024-10-08 18:30:08.533366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb94990 is same with the state(6) to be set
00:22:15.456 [2024-10-08 18:30:08.533429] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb94990 was disconnected and freed. reset controller.
cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.533987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.533996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.534008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.534017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.534031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.534041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.534053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.534062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.534074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.534083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.534096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.534106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.534117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.534128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.534140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.534150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.534162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.534172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.534184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.534193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.534204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.534214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.456 [2024-10-08 18:30:08.534226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.456 [2024-10-08 18:30:08.534235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:15.457 [2024-10-08 18:30:08.534277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 
18:30:08.534492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534708] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.457 [2024-10-08 18:30:08.534820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.457 [2024-10-08 18:30:08.534831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc41dc0 is same with the state(6) to be set 00:22:15.457 [2024-10-08 18:30:08.534900] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc41dc0 was disconnected and freed. reset controller. 00:22:15.457 [2024-10-08 18:30:08.536258] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:15.457 [2024-10-08 18:30:08.536296] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:15.457 [2024-10-08 18:30:08.536323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7c910 (9): Bad file descriptor 00:22:15.457 [2024-10-08 18:30:08.536336] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:15.457 [2024-10-08 18:30:08.536345] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:15.457 [2024-10-08 18:30:08.536356] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:15.457 [2024-10-08 18:30:08.538949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
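The burst of READ/WRITE commands completing with "ABORTED - SQ DELETION" above, followed by "resetting controller", "controller reinitialization failed" and "in failed state", traces the initiator-side reset cycle: when a qpair is disconnected its submission queue is deleted, every queued command is failed back with that status, and the driver then tries to reconnect. A minimal sketch of that cycle, assuming SPDK's public controller APIs (spdk_nvme_ctrlr_disconnect(), spdk_nvme_ctrlr_reconnect_async(), spdk_nvme_ctrlr_reconnect_poll_async(); the last of these is the function named in the log) — the bdev_nvme reset path actually running here is more involved:

    /* Sketch only: drive one disconnect/reconnect cycle for a controller. */
    #include <errno.h>
    #include "spdk/nvme.h"

    static int
    reset_cycle(struct spdk_nvme_ctrlr *ctrlr)
    {
            int rc;

            /* Tear down the transport connection; in-flight I/O completes
             * with ABORTED - SQ DELETION, as in the notices above. */
            rc = spdk_nvme_ctrlr_disconnect(ctrlr);
            if (rc != 0) {
                    return rc;
            }

            /* Start reinitialization and poll it to completion. */
            spdk_nvme_ctrlr_reconnect_async(ctrlr);
            do {
                    rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
            } while (rc == -EAGAIN);

            /* Any other negative rc is the "controller reinitialization
             * failed" / "in failed state" outcome logged above. */
            return rc;
    }

A failed cycle leaves the controller in the failed state, which is why the log then shows _bdev_nvme_reset_ctrlr_complete reporting "Resetting controller failed."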
00:22:15.457 [2024-10-08 18:30:08.538973] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:15.457 [2024-10-08 18:30:08.538987] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:15.457 [2024-10-08 18:30:08.539178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.457 [2024-10-08 18:30:08.539195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x854650 with addr=10.0.0.2, port=4420 00:22:15.457 [2024-10-08 18:30:08.539206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x854650 is same with the state(6) to be set 00:22:15.457 [2024-10-08 18:30:08.539217] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:15.457 [2024-10-08 18:30:08.539225] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:15.457 [2024-10-08 18:30:08.539236] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:15.457 [2024-10-08 18:30:08.539281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6350 (9): Bad file descriptor 00:22:15.457 [2024-10-08 18:30:08.539657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.457 [2024-10-08 18:30:08.539853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.457 [2024-10-08 18:30:08.539870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x857e10 with addr=10.0.0.2, port=4420 00:22:15.457 [2024-10-08 18:30:08.539880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x857e10 is same with the state(6) to be set 00:22:15.457 [2024-10-08 18:30:08.539963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.457 [2024-10-08 18:30:08.539976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x84d6d0 with addr=10.0.0.2, port=4420 00:22:15.457 [2024-10-08 18:30:08.539986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x84d6d0 is same with the state(6) to be set 00:22:15.457 [2024-10-08 18:30:08.539998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x854650 (9): Bad file descriptor 00:22:15.457 [2024-10-08 18:30:08.540038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
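Every completion in this run prints the same status tuple, "(00/08) ... p:0 m:0 dnr:0". The two hex fields are the NVMe status code type and status code from the completion-queue entry; SCT 0x0 / SC 0x08 is the generic status "Command Aborted due to SQ Deletion", which is why whole queues of reads resolve this way once a qpair is torn down. (The connect() failures above with errno = 111, ECONNREFUSED, just mean nothing was accepting on 10.0.0.2:4420 at that instant.) Below is a small self-contained decoder for the 16-bit status field (CQE dword 3, bits 31:16, per the NVMe base specification); field names follow the spec, not any particular header:

    /* Decode an NVMe CQE status field: P[0] SC[8:1] SCT[11:9]
     * CRD[13:12] M[14] DNR[15]. */
    #include <stdint.h>
    #include <stdio.h>

    struct nvme_status {
            uint8_t p;    /* phase tag */
            uint8_t sc;   /* status code */
            uint8_t sct;  /* status code type */
            uint8_t crd;  /* command retry delay */
            uint8_t m;    /* more */
            uint8_t dnr;  /* do not retry */
    };

    static struct nvme_status
    decode_status(uint16_t status)
    {
            struct nvme_status s = {
                    .p   = status & 0x1,
                    .sc  = (status >> 1) & 0xff,
                    .sct = (status >> 9) & 0x7,
                    .crd = (status >> 12) & 0x3,
                    .m   = (status >> 14) & 0x1,
                    .dnr = (status >> 15) & 0x1,
            };
            return s;
    }

    int
    main(void)
    {
            /* SCT=0x0, SC=0x08: the tuple printed as "(00/08)" above. */
            uint16_t raw = (0x0 << 9) | (0x08 << 1);
            struct nvme_status s = decode_status(raw);
            printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
            return 0;
    }

Compiled and run, this prints "(00/08) p:0 m:0 dnr:0", matching the tuples in the notices above.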
00:22:15.458 [2024-10-08 18:30:08.540542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 
18:30:08.540752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.458 [2024-10-08 18:30:08.540921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.458 [2024-10-08 18:30:08.540930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.540942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.540951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.540962] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.540971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.540983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.540992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.541392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.541403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87150 is same with the state(6) to be set 00:22:15.459 [2024-10-08 18:30:08.543528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.543561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.543576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.543586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.543597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.543607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.543624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.543633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.543645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.543655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.543666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.543675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.543687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.543697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.543709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.543719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.543730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.543740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.543752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.543761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.543773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.543782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.543794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.459 [2024-10-08 18:30:08.543803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.459 [2024-10-08 18:30:08.543814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.543824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.543840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.543850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.543862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.543871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.543883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.543892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.543913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.543919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.543928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.543934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.543943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.543949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.543957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.543964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.543973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.543979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.543988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.543995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.460 [2024-10-08 18:30:08.544440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.460 [2024-10-08 18:30:08.544449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.461 [2024-10-08 18:30:08.544457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.461 [2024-10-08 18:30:08.544463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.461 [2024-10-08 18:30:08.544472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.461 [2024-10-08 18:30:08.544479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.461 [2024-10-08 18:30:08.544487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.461 [2024-10-08 18:30:08.544494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.461 [2024-10-08 18:30:08.544503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.461 [2024-10-08 18:30:08.544509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.461 [2024-10-08 18:30:08.544518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.461 [2024-10-08 18:30:08.544524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.461 [2024-10-08 18:30:08.544533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.461 [2024-10-08 18:30:08.544539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.461 [2024-10-08 18:30:08.544547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.461 [2024-10-08 18:30:08.544554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.461 [2024-10-08 18:30:08.544562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.461 [2024-10-08 18:30:08.544569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.461 [2024-10-08 18:30:08.544577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.461 [2024-10-08 18:30:08.544584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 4 more READ / ABORTED - SQ DELETION (00/08) pairs for cid:56-59, lba:31744-32128 (lba step 128) ...]
00:22:15.461 [2024-10-08 18:30:08.544654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5c580 is same with the state(6) to be set
00:22:15.461 [2024-10-08 18:30:08.545637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.461 [2024-10-08 18:30:08.545649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 more READ / ABORTED - SQ DELETION (00/08) pairs for cid:1-63, lba:16512-24448 (lba step 128) ...]
00:22:15.462 [2024-10-08 18:30:08.546613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5db00 is same with the state(6) to be set
00:22:15.462 [2024-10-08 18:30:08.547599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.462 [2024-10-08 18:30:08.547612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 more READ / ABORTED - SQ DELETION (00/08) pairs for cid:1-63, lba:16512-24448 (lba step 128) ...]
00:22:15.464 [2024-10-08 18:30:08.548585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd807d0 is same with the state(6) to be set
00:22:15.464 [2024-10-08 18:30:08.549528] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:15.464 [2024-10-08 18:30:08.549544] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:15.464 [2024-10-08 18:30:08.549554] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:15.464 [2024-10-08 18:30:08.549564] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:15.464 [2024-10-08 18:30:08.549598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x857e10 (9): Bad file descriptor
00:22:15.464 [2024-10-08 18:30:08.549609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x84d6d0 (9): Bad file descriptor
00:22:15.464 [2024-10-08 18:30:08.549617] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:22:15.464 [2024-10-08 18:30:08.549625] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:22:15.464 [2024-10-08 18:30:08.549634] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:22:15.464 [2024-10-08 18:30:08.549665] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:15.464 [2024-10-08 18:30:08.549676] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:15.464 [2024-10-08 18:30:08.549694] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:15.464 [2024-10-08 18:30:08.549705] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:15.464 [2024-10-08 18:30:08.549770] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:15.464 [2024-10-08 18:30:08.549781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:15.464 [2024-10-08 18:30:08.549906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:15.464 [2024-10-08 18:30:08.549918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x76d610 with addr=10.0.0.2, port=4420
00:22:15.464 [2024-10-08 18:30:08.549926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76d610 is same with the state(6) to be set
00:22:15.464 [2024-10-08 18:30:08.550015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:15.464 [2024-10-08 18:30:08.550025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x858270 with addr=10.0.0.2, port=4420
00:22:15.464 [2024-10-08 18:30:08.550032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x858270 is same with the state(6) to be set
00:22:15.464 [2024-10-08 18:30:08.550143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:15.464 [2024-10-08 18:30:08.550153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc7d530 with addr=10.0.0.2, port=4420
00:22:15.464 [2024-10-08 18:30:08.550160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d530 is same with the state(6) to be set
00:22:15.464 [2024-10-08 18:30:08.550320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:15.464 [2024-10-08 18:30:08.550331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc793c0 with addr=10.0.0.2, port=4420
00:22:15.464 [2024-10-08 18:30:08.550338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc793c0 is same with the state(6) to be set
00:22:15.464 [2024-10-08 18:30:08.550345] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:22:15.464 [2024-10-08 18:30:08.550351] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:22:15.464 [2024-10-08 18:30:08.550358] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:22:15.464 [2024-10-08 18:30:08.550369] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:22:15.464 [2024-10-08 18:30:08.550381] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:22:15.464 [2024-10-08 18:30:08.550388] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:22:15.464 [2024-10-08 18:30:08.551090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.464 [2024-10-08 18:30:08.551104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 59 more READ / ABORTED - SQ DELETION (00/08) pairs for cid:1-59, lba:16512-23936 (lba step 128) ...]
00:22:15.466 [2024-10-08 18:30:08.552039] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.466 [2024-10-08 18:30:08.552046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.466 [2024-10-08 18:30:08.552054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.466 [2024-10-08 18:30:08.552061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.466 [2024-10-08 18:30:08.552069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.466 [2024-10-08 18:30:08.552076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.466 [2024-10-08 18:30:08.552084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.466 [2024-10-08 18:30:08.552091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.466 [2024-10-08 18:30:08.552099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7f250 is same with the state(6) to be set 00:22:15.466 [2024-10-08 18:30:08.553268] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:15.466 [2024-10-08 18:30:08.553284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.466 [2024-10-08 18:30:08.553291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
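The storm above is one ABORTED - SQ DELETION completion for every READ that was still queued on qid:1 when the submission queue was torn down during the controller reset. When triaging a capture like this, a one-liner over the saved console output gives a quick tally of how many commands each queue lost (a sketch; 'console.log' is a placeholder for wherever this output was saved):

  # count aborted completions per submission queue id
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' console.log | sort | uniq -c

Each READ print pairs with one ABORTED completion in this log, so the count should match the number of commands outstanding at reset time.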
00:22:15.466 task offset: 24576 on job bdev=Nvme7n1 fails
00:22:15.466
00:22:15.466 Latency(us)
00:22:15.466 [2024-10-08T16:30:08.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:15.466 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.466 Job: Nvme1n1 ended in about 0.82 seconds with error
00:22:15.466 Verification LBA range: start 0x0 length 0x400
00:22:15.466 Nvme1n1 : 0.82 170.20 10.64 78.37 0.00 254718.74 22594.32 227690.79
00:22:15.466 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.466 Job: Nvme2n1 ended in about 0.81 seconds with error
00:22:15.466 Verification LBA range: start 0x0 length 0x400
00:22:15.466 Nvme2n1 : 0.81 242.71 15.17 78.85 0.00 193053.20 16227.96 198730.12
00:22:15.466 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.466 Job: Nvme3n1 ended in about 0.81 seconds with error
00:22:15.466 Verification LBA range: start 0x0 length 0x400
00:22:15.466 Nvme3n1 : 0.81 236.19 14.76 78.73 0.00 193275.61 15853.47 211712.49
00:22:15.466 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.466 Job: Nvme4n1 ended in about 0.81 seconds with error
00:22:15.466 Verification LBA range: start 0x0 length 0x400
00:22:15.466 Nvme4n1 : 0.81 170.33 10.65 78.99 0.00 239258.93 14168.26 220700.28
00:22:15.466 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.466 Job: Nvme5n1 ended in about 0.82 seconds with error
00:22:15.466 Verification LBA range: start 0x0 length 0x400
00:22:15.466 Nvme5n1 : 0.82 234.20 14.64 78.07 0.00 187293.26 16227.96 218702.99
00:22:15.466 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.466 Job: Nvme6n1 ended in about 0.82 seconds with error
00:22:15.466 Verification LBA range: start 0x0 length 0x400
00:22:15.466 Nvme6n1 : 0.82 155.76 9.73 77.88 0.00 245266.53 17101.78 221698.93
00:22:15.466 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.466 Job: Nvme7n1 ended in about 0.79 seconds with error
00:22:15.466 Verification LBA range: start 0x0 length 0x400
00:22:15.466 Nvme7n1 : 0.79 244.01 15.25 81.34 0.00 171302.58 3620.08 219701.64
00:22:15.466 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.466 Job: Nvme8n1 ended in about 0.80 seconds with error
00:22:15.466 Verification LBA range: start 0x0 length 0x400
00:22:15.466 Nvme8n1 : 0.80 245.38 15.34 79.72 0.00 168104.20 12108.56 212711.13
00:22:15.466 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.466 Job: Nvme9n1 ended in about 0.83 seconds with error
00:22:15.466 Verification LBA range: start 0x0 length 0x400
00:22:15.466 Nvme9n1 : 0.83 154.73 9.67 77.36 0.00 231569.96 31706.94 229688.08
00:22:15.466 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.466 Job: Nvme10n1 ended in about 0.82 seconds with error
00:22:15.466 Verification LBA range: start 0x0 length 0x400
00:22:15.466 Nvme10n1 : 0.82 155.39 9.71 77.69 0.00 225293.00 18474.91 233682.65
00:22:15.466 [2024-10-08T16:30:08.789Z] ===================================================================================================================
00:22:15.466 [2024-10-08T16:30:08.789Z] Total : 2008.89 125.56 786.99 0.00 207128.25 3620.08 233682.65
00:22:15.466 [2024-10-08 18:30:08.583340] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:15.466 [2024-10-08 18:30:08.583395]
nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:15.466 [2024-10-08 18:30:08.583650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.466 [2024-10-08 18:30:08.583668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca5770 with addr=10.0.0.2, port=4420 00:22:15.466 [2024-10-08 18:30:08.583679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca5770 is same with the state(6) to be set 00:22:15.466 [2024-10-08 18:30:08.583693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76d610 (9): Bad file descriptor 00:22:15.466 [2024-10-08 18:30:08.583706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x858270 (9): Bad file descriptor 00:22:15.466 [2024-10-08 18:30:08.583715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7d530 (9): Bad file descriptor 00:22:15.466 [2024-10-08 18:30:08.583723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc793c0 (9): Bad file descriptor 00:22:15.467 [2024-10-08 18:30:08.583899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.467 [2024-10-08 18:30:08.583913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc7c910 with addr=10.0.0.2, port=4420 00:22:15.467 [2024-10-08 18:30:08.583921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7c910 is same with the state(6) to be set 00:22:15.467 [2024-10-08 18:30:08.584024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.467 [2024-10-08 18:30:08.584035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca6350 with addr=10.0.0.2, port=4420 00:22:15.467 [2024-10-08 18:30:08.584049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6350 is same with the state(6) to be set 00:22:15.467 [2024-10-08 18:30:08.584058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca5770 (9): Bad file descriptor 00:22:15.467 [2024-10-08 18:30:08.584067] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:15.467 [2024-10-08 18:30:08.584074] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:15.467 [2024-10-08 18:30:08.584082] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:15.467 [2024-10-08 18:30:08.584097] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:15.467 [2024-10-08 18:30:08.584104] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:15.467 [2024-10-08 18:30:08.584110] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
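The per-device rows in the latency table above can be cross-checked against the printed Total row. A minimal awk sketch (assuming the row layout shown, with the '00:22:15.466' timestamp prefix as field 1, and 'console.log' again standing in for the saved output) that re-sums the IOPS, MiB/s and Fail/s columns:

  # sum the data rows of the form "NvmeXn1 : runtime IOPS MiB/s Fail/s ..."
  awk '$3 == ":" && $2 ~ /^Nvme[0-9]+n1$/ { iops += $5; mib += $6; fail += $7 }
       END { printf "Total: IOPS=%.2f MiB/s=%.2f Fail/s=%.2f\n", iops, mib, fail }' console.log

Summing the ten device rows reproduces the Total line (2008.89 IOPS, 125.56 MiB/s, 786.99 Fail/s) up to last-digit rounding.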
00:22:15.467 [2024-10-08 18:30:08.584120] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:15.467 [2024-10-08 18:30:08.584126] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:15.467 [2024-10-08 18:30:08.584133] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:15.467 [2024-10-08 18:30:08.584142] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:15.467 [2024-10-08 18:30:08.584149] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:15.467 [2024-10-08 18:30:08.584155] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:15.467 [2024-10-08 18:30:08.584202] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.467 [2024-10-08 18:30:08.584214] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.467 [2024-10-08 18:30:08.584223] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.467 [2024-10-08 18:30:08.584233] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.467 [2024-10-08 18:30:08.584243] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.467 [2024-10-08 18:30:08.584526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.467 [2024-10-08 18:30:08.584536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.467 [2024-10-08 18:30:08.584542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.467 [2024-10-08 18:30:08.584547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.467 [2024-10-08 18:30:08.584563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7c910 (9): Bad file descriptor 00:22:15.467 [2024-10-08 18:30:08.584573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6350 (9): Bad file descriptor 00:22:15.467 [2024-10-08 18:30:08.584580] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:15.467 [2024-10-08 18:30:08.584586] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:15.467 [2024-10-08 18:30:08.584592] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:15.467 [2024-10-08 18:30:08.584626] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:15.467 [2024-10-08 18:30:08.584637] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:15.467 [2024-10-08 18:30:08.584649] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:15.467 [2024-10-08 18:30:08.584657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:15.467 [2024-10-08 18:30:08.584679] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:15.467 [2024-10-08 18:30:08.584686] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:15.467 [2024-10-08 18:30:08.584693] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:15.467 [2024-10-08 18:30:08.584702] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:15.467 [2024-10-08 18:30:08.584708] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:15.467 [2024-10-08 18:30:08.584715] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:15.467 [2024-10-08 18:30:08.584743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.467 [2024-10-08 18:30:08.584751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.467 [2024-10-08 18:30:08.584913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.467 [2024-10-08 18:30:08.584924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x854650 with addr=10.0.0.2, port=4420 00:22:15.467 [2024-10-08 18:30:08.584932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x854650 is same with the state(6) to be set 00:22:15.467 [2024-10-08 18:30:08.585024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.467 [2024-10-08 18:30:08.585034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x84d6d0 with addr=10.0.0.2, port=4420 00:22:15.467 [2024-10-08 18:30:08.585042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x84d6d0 is same with the state(6) to be set 00:22:15.467 [2024-10-08 18:30:08.585127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.467 [2024-10-08 18:30:08.585137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x857e10 with addr=10.0.0.2, port=4420 00:22:15.467 [2024-10-08 18:30:08.585144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x857e10 is same with the state(6) to be set 00:22:15.467 [2024-10-08 18:30:08.585169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x854650 (9): Bad file descriptor 00:22:15.467 [2024-10-08 18:30:08.585179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x84d6d0 (9): Bad file descriptor 00:22:15.467 [2024-10-08 18:30:08.585188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x857e10 (9): Bad file descriptor 00:22:15.467 [2024-10-08 18:30:08.585211] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:15.467 [2024-10-08 18:30:08.585217] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:15.467 [2024-10-08 18:30:08.585224] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
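With ten subsystems (cnode1 through cnode10) failing their reconnects in parallel, it helps to see at a glance which controllers actually ended up stuck. A sketch that lists each NQN the log marks as failed, again over a saved copy of this console output:

  # list the NQNs that reached "in failed state", with a count per controller
  grep -o '\[nqn\.[^]]*\] in failed state' console.log | sort | uniq -c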
00:22:15.467 [2024-10-08 18:30:08.585232] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:15.467 [2024-10-08 18:30:08.585238] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:15.467 [2024-10-08 18:30:08.585245] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:15.467 [2024-10-08 18:30:08.585253] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:15.467 [2024-10-08 18:30:08.585259] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:15.467 [2024-10-08 18:30:08.585269] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:15.467 [2024-10-08 18:30:08.585294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.467 [2024-10-08 18:30:08.585301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.467 [2024-10-08 18:30:08.585307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.726 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 484220 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 484220 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 484220 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 
00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:16.663 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:16.663 rmmod nvme_tcp 00:22:16.922 rmmod nvme_fabrics 00:22:16.922 rmmod nvme_keyring 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 483910 ']' 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 483910 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 483910 ']' 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 483910 00:22:16.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (483910) - No such process 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 483910 is not found' 00:22:16.922 Process with pid 483910 is not found 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.922 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.825 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:18.825 00:22:18.825 real 0m8.173s 00:22:18.825 user 0m20.809s 00:22:18.825 sys 0m1.358s 00:22:18.825 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:18.825 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.825 ************************************ 00:22:18.825 END TEST nvmf_shutdown_tc3 00:22:18.825 ************************************ 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:19.084 ************************************ 00:22:19.084 START TEST nvmf_shutdown_tc4 00:22:19.084 ************************************ 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.084 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:19.085 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:19.085 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:19.085 Found net devices under 0000:86:00.0: cvl_0_0 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:19.085 Found net devices under 0000:86:00.1: cvl_0_1 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:19.085 18:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:19.085 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:19.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:19.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms
00:22:19.345
00:22:19.345 --- 10.0.0.2 ping statistics ---
00:22:19.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:19.345 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:19.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:19.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms
00:22:19.345
00:22:19.345 --- 10.0.0.1 ping statistics ---
00:22:19.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:19.345 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=485705
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 485705
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 485705 ']'
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.345 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:19.345 [2024-10-08 18:30:12.561411] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:22:19.345 [2024-10-08 18:30:12.561455] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.345 [2024-10-08 18:30:12.633009] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.604 [2024-10-08 18:30:12.714348] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.604 [2024-10-08 18:30:12.714389] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.604 [2024-10-08 18:30:12.714397] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.604 [2024-10-08 18:30:12.714403] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.604 [2024-10-08 18:30:12.714408] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.604 [2024-10-08 18:30:12.716025] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.604 [2024-10-08 18:30:12.716125] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.604 [2024-10-08 18:30:12.716231] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.604 [2024-10-08 18:30:12.716232] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:20.171 [2024-10-08 18:30:13.432274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:20.171 18:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.171 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:20.430 Malloc1 00:22:20.430 [2024-10-08 18:30:13.531867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.430 Malloc2 00:22:20.430 Malloc3 00:22:20.430 Malloc4 00:22:20.430 Malloc5 00:22:20.430 Malloc6 00:22:20.689 Malloc7 00:22:20.689 Malloc8 00:22:20.689 Malloc9 00:22:20.689 Malloc10 00:22:20.689 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.689 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:20.689 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:20.689 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:20.689 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=485989 00:22:20.689 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:20.689 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:20.947 [2024-10-08 18:30:14.027567] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:22:26.230 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:26.230 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 485705 00:22:26.230 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 485705 ']' 00:22:26.230 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 485705 00:22:26.230 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:22:26.230 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.230 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 485705 00:22:26.230 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:26.230 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:26.230 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 485705' 00:22:26.230 killing process with pid 485705 00:22:26.230 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 485705 00:22:26.230 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 485705 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error 
(sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 [2024-10-08 18:30:19.031857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:26.230 [2024-10-08 18:30:19.032006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78770 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78770 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78770 is same with the state(6) to be set 00:22:26.230 starting I/O failed: -6 00:22:26.230 starting I/O failed: -6 00:22:26.230 starting I/O failed: -6 00:22:26.230 starting I/O failed: -6 00:22:26.230 starting I/O failed: -6 00:22:26.230 [2024-10-08 18:30:19.032726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 starting I/O failed: -6 00:22:26.230 [2024-10-08 18:30:19.032801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the 
state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 [2024-10-08 18:30:19.032851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc778e0 is same with the state(6) to be set 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 [2024-10-08 18:30:19.033297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec030 is same with the state(6) to be set 00:22:26.230 Write completed with error (sct=0, sc=8) 00:22:26.230 starting I/O failed: -6 00:22:26.230 [2024-10-08 18:30:19.033328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec030 is same with the state(6) to be set 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 [2024-10-08 18:30:19.033337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec030 is same with the state(6) to be set 00:22:26.231 [2024-10-08 18:30:19.033344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec030 is same with the state(6) to be set 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 [2024-10-08 18:30:19.033350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec030 is same with the state(6) to be set 00:22:26.231 starting I/O failed: -6 00:22:26.231 [2024-10-08 18:30:19.033357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec030 is same with the
state(6) to be set 00:22:26.231 [2024-10-08 18:30:19.033364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec030 is same with the state(6) to be set 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 [2024-10-08 18:30:19.033370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec030 is same with the state(6) to be set 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 [2024-10-08 18:30:19.033485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 [2024-10-08 18:30:19.033658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec500 is same with the state(6) to be set 00:22:26.231 [2024-10-08 18:30:19.033684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec500 is same with the state(6) to be set 00:22:26.231 [2024-10-08 18:30:19.033692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec500 is same with the state(6) to be set 00:22:26.231 [2024-10-08 18:30:19.033698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec500 is same with the state(6) to be set 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 [2024-10-08 18:30:19.033705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec500 is same with the state(6) to be set 00:22:26.231 starting I/O failed: -6 00:22:26.231 [2024-10-08 18:30:19.033711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec500 is same with the state(6) to be set 00:22:26.231 [2024-10-08 18:30:19.033718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec500 is same with the state(6) to be set 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 [2024-10-08 18:30:19.033724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec500 is same with the state(6) to be set 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 
00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 [2024-10-08 18:30:19.034022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec9d0 is same with the state(6) to be set 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 [2024-10-08 18:30:19.034040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec9d0 is same with the state(6) to be set 00:22:26.231 starting I/O failed: -6 00:22:26.231 [2024-10-08 18:30:19.034048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec9d0 is same with the state(6) to be set 00:22:26.231 [2024-10-08 18:30:19.034055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec9d0 is same with the state(6) to be set 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 [2024-10-08 18:30:19.034060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec9d0 is same with the state(6) to be set 00:22:26.231 starting I/O failed: -6 00:22:26.231 [2024-10-08 18:30:19.034067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec9d0 is same with the state(6) to be set 00:22:26.231 [2024-10-08 18:30:19.034073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec9d0 is same with the state(6) to be set 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, 
sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 [2024-10-08 18:30:19.034772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeebb60 is same with the state(6) to be set 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 [2024-10-08 18:30:19.034788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeebb60 is same with the state(6) to be set 00:22:26.231 [2024-10-08 18:30:19.034799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeebb60 is same with the state(6) to be set 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 [2024-10-08 18:30:19.034805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeebb60 is same with the state(6) to be set 00:22:26.231 starting I/O failed: -6 00:22:26.231 [2024-10-08 18:30:19.034813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeebb60 is same with the state(6) to be set 00:22:26.231 [2024-10-08 18:30:19.034819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeebb60 is same with the state(6) to be set 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.231 Write completed with error (sct=0, sc=8) 00:22:26.231 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting
I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 [2024-10-08 18:30:19.035818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.232 NVMe io qpair process completion error 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 
00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 [2024-10-08 18:30:19.039173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with 
error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 [2024-10-08 18:30:19.040062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 [2024-10-08 18:30:19.040222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe583f0 is same with the state(6) to be set 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 [2024-10-08 18:30:19.040246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe583f0 is same with the state(6) to be set 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 [2024-10-08 18:30:19.040258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe583f0 is same with the state(6) to be set 00:22:26.232 starting I/O failed: -6 00:22:26.232 [2024-10-08 18:30:19.040265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe583f0 is same with the state(6) to be set 00:22:26.232 [2024-10-08 18:30:19.040271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe583f0 is same with the state(6) to be set 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 [2024-10-08 18:30:19.040277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe583f0 is same with the state(6) to be set 00:22:26.232 [2024-10-08 18:30:19.040283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe583f0 is same with the state(6) to be set 00:22:26.232 [2024-10-08 18:30:19.040289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe583f0 is same with the state(6) to be set
00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 [2024-10-08 18:30:19.040296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe583f0 is same with the state(6) to be set 00:22:26.232 [2024-10-08 18:30:19.040303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe583f0 is same with the state(6) to be set 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.232 starting I/O failed: -6 00:22:26.232 Write completed with error (sct=0, sc=8) 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 [2024-10-08 18:30:19.040568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78c60 is same with the state(6) to be set 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 [2024-10-08 18:30:19.040588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78c60 is same with the state(6) to be set 00:22:26.233 starting I/O failed: -6 00:22:26.233 [2024-10-08 18:30:19.040596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78c60 is same with the state(6) to be set 00:22:26.233 [2024-10-08 18:30:19.040602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78c60 is same with the state(6) to be set 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 [2024-10-08 18:30:19.040608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78c60 is same with the state(6) to be set 00:22:26.233 starting I/O failed: -6 00:22:26.233 [2024-10-08 18:30:19.040615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78c60 is same with the state(6) to be set 00:22:26.233 [2024-10-08 18:30:19.040621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78c60 is same with the state(6) to be set 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 [2024-10-08 18:30:19.040627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78c60 is same with the state(6) to be set 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6
00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 [2024-10-08 18:30:19.040910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79150 is same with the state(6) to be set 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 [2024-10-08 18:30:19.040931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79150 is same with the state(6) to be set 00:22:26.233 [2024-10-08 18:30:19.040939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79150 is same with the state(6) to be set 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 [2024-10-08 18:30:19.040945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79150 is same with the state(6) to be set 00:22:26.233 [2024-10-08 18:30:19.040952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79150 is same with the state(6) to be set 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 [2024-10-08 18:30:19.040958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79150 is same with the state(6) to be set 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 [2024-10-08 18:30:19.041054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error
(sct=0, sc=8) 00:22:26.233 [2024-10-08 18:30:19.041337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe57f20 is same with the state(6) to be set 00:22:26.233 starting I/O failed: -6 00:22:26.233 [2024-10-08 18:30:19.041358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe57f20 is same with the state(6) to be set 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 [2024-10-08 18:30:19.041365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe57f20 is same with the state(6) to be set 00:22:26.233 starting I/O failed: -6 00:22:26.233 [2024-10-08 18:30:19.041373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe57f20 is same with the state(6) to be set 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 [2024-10-08 18:30:19.041383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe57f20 is same with the state(6) to be set 00:22:26.233 starting I/O failed: -6 00:22:26.233 [2024-10-08 18:30:19.041391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe57f20 is same with the state(6) to be set 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233
starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.233 Write completed with error (sct=0, sc=8) 00:22:26.233 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 [2024-10-08 18:30:19.042567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.234 NVMe io qpair process completion error 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error 
(sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 [2024-10-08 18:30:19.043564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 Write completed with error (sct=0, sc=8) 00:22:26.234 starting I/O failed: -6 
00:22:26.234 Write completed with error (sct=0, sc=8)
00:22:26.234 starting I/O failed: -6
[... many repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines elided ...]
00:22:26.234 [2024-10-08 18:30:19.044363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines elided ...]
00:22:26.235 [2024-10-08 18:30:19.045382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error lines elided ...]
00:22:26.235 [2024-10-08 18:30:19.046910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:26.235 NVMe io qpair process completion error
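The (sct=0, sc=8) pairs in these lines are the NVMe Status Code Type and Status Code taken from each failed command's completion entry: sct=0 is the generic command status type, under which sc=8 means "Command Aborted due to SQ Deletion", i.e. writes that were still outstanding when their submission queue was torn down. A minimal sketch of how an I/O completion callback can decode those fields with SPDK's public API (the callback name and the aborted flag are hypothetical; the SPDK types and constants are real):

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h" /* also pulls in spdk/nvme_spec.h for status codes */

    /* Hypothetical spdk_nvme_cmd_cb registered via spdk_nvme_ns_cmd_write(). */
    static void
    write_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            bool *aborted = cb_arg;

            if (spdk_nvme_cpl_is_error(cpl)) {
                    /* Prints lines like "Write completed with error (sct=0, sc=8)". */
                    printf("Write completed with error (sct=%d, sc=%d)\n",
                           cpl->status.sct, cpl->status.sc);

                    /* sct=0/sc=8 from this log decodes to "aborted, SQ deleted". */
                    *aborted = (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION);
            }
    }

Aborted writes arriving in lockstep with CQ transport errors is consistent with the target dropping the connection while I/O was in flight, so by themselves these lines are noise this test expects to generate; the pass/fail verdict comes later in the log.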
[... repeated write-error lines elided ...]
00:22:26.235 [2024-10-08 18:30:19.047781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error lines elided ...]
00:22:26.236 [2024-10-08 18:30:19.048636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines elided ...]
00:22:26.236 [2024-10-08 18:30:19.049658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error lines elided ...]
00:22:26.237 [2024-10-08 18:30:19.051731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:26.237 NVMe io qpair process completion error
[... repeated write-error lines elided ...]
00:22:26.237 [2024-10-08 18:30:19.052695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error lines elided ...]
00:22:26.237 [2024-10-08 18:30:19.053584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines elided ...]
00:22:26.238 [2024-10-08 18:30:19.054574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error lines elided ...]
00:22:26.238 [2024-10-08 18:30:19.056538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:26.238 NVMe io qpair process completion error
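Every -6 in this section is a negated errno: 6 is ENXIO, "No such device or address", the same text the driver prints in its CQ transport error message. It surfaces on two paths: "starting I/O failed: -6" when spdk_nvme_ns_cmd_write() refuses a new submission on an already-failed qpair, and as the negative return value of spdk_nvme_qpair_process_completions() once the TCP transport has lost the controller, after which the application abandons the qpair and prints "NVMe io qpair process completion error". A sketch of that submit-and-poll shape (the helper and callback names are hypothetical; the two SPDK calls and their negated-errno return convention are real):

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    on_write(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            (void)cb_arg;
            (void)cpl; /* per-command status decoding as in the earlier sketch */
    }

    /* Hypothetical helper: queue one write, then poll until the qpair either
     * returns completions or fails at the transport level. */
    static int
    submit_and_poll(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                    void *buf, uint64_t lba, uint32_t lba_count)
    {
            int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                            on_write, NULL, 0 /* io_flags */);
            if (rc != 0) {
                    /* On a failed qpair this is -ENXIO: "starting I/O failed: -6". */
                    printf("starting I/O failed: %d\n", rc);
                    return rc;
            }

            for (;;) {
                    /* max_completions == 0 means reap everything available. */
                    int32_t n = spdk_nvme_qpair_process_completions(qpair, 0);
                    if (n < 0) {
                            /* Transport-level failure; with the target gone the
                             * driver logs "CQ transport error -6" and returns
                             * -ENXIO here. */
                            return n;
                    }
                    if (n > 0) {
                            return 0; /* completion reaped; its NVMe status may
                                       * still be an error (see sct/sc above) */
                    }
            }
    }

The four distinct qpair ids per burst, each emitting its own CQ transport error before a single "NVMe io qpair process completion error", suggest the application drives several I/O qpairs per controller and retires them one by one as each poll loop hits -ENXIO.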
[... repeated write-error lines elided ...]
00:22:26.238 [2024-10-08 18:30:19.057443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error lines elided ...]
00:22:26.239 [2024-10-08 18:30:19.058314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines elided ...]
00:22:26.239 [2024-10-08 18:30:19.059469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error lines elided ...]
00:22:26.240 [2024-10-08 18:30:19.062977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:26.240 NVMe io qpair process completion error
[... repeated write-error lines elided ...]
00:22:26.240 [2024-10-08 18:30:19.064115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error lines elided ...]
00:22:26.240 [2024-10-08 18:30:19.064920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines elided ...]
00:22:26.241 [2024-10-08 18:30:19.065933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error lines elided ...]
00:22:26.241 [2024-10-08 18:30:19.068667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:26.241 NVMe io qpair process completion error
[... repeated write-error lines elided ...]
Write completed with error (sct=0, sc=8) 00:22:26.241 starting I/O failed: -6 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 starting I/O failed: -6 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 starting I/O failed: -6 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 starting I/O failed: -6 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 starting I/O failed: -6 00:22:26.241 [2024-10-08 18:30:19.069688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 starting I/O failed: -6 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 starting I/O failed: -6 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 starting I/O failed: -6 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 starting I/O failed: -6 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 starting I/O failed: -6 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 starting I/O failed: -6 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 starting I/O failed: -6 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 starting I/O failed: -6 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.241 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 
00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 [2024-10-08 18:30:19.070559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error 
(sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 [2024-10-08 18:30:19.071570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 
00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.242 Write completed with error (sct=0, sc=8) 00:22:26.242 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 
00:22:26.243 starting I/O failed: -6 00:22:26.243 [2024-10-08 18:30:19.073336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.243 NVMe io qpair process completion error 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 [2024-10-08 18:30:19.074365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 
starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 [2024-10-08 18:30:19.075266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 
00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 [2024-10-08 18:30:19.076231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.243 starting I/O failed: -6 00:22:26.243 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write 
completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write 
completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 [2024-10-08 18:30:19.081723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:26.244 NVMe io qpair process completion error 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with 
error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 [2024-10-08 18:30:19.082751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 Write completed with error (sct=0, sc=8) 00:22:26.244 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, 
sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 [2024-10-08 18:30:19.083632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with 
error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 [2024-10-08 18:30:19.084613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 
Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.245 starting I/O failed: -6 00:22:26.245 Write completed with error (sct=0, sc=8) 00:22:26.246 starting I/O failed: -6 00:22:26.246 Write completed with error (sct=0, sc=8) 00:22:26.246 starting I/O failed: -6 00:22:26.246 Write completed with error (sct=0, sc=8) 00:22:26.246 starting I/O failed: -6 00:22:26.246 Write completed with error (sct=0, sc=8) 00:22:26.246 starting I/O failed: -6 00:22:26.246 Write completed with error (sct=0, sc=8) 00:22:26.246 starting I/O failed: -6 00:22:26.246 Write completed with error (sct=0, sc=8) 00:22:26.246 starting I/O failed: -6 00:22:26.246 Write completed with error (sct=0, sc=8) 00:22:26.246 starting I/O failed: -6 00:22:26.246 Write completed with error (sct=0, sc=8) 00:22:26.246 starting I/O failed: -6 00:22:26.246 Write completed with error (sct=0, sc=8) 00:22:26.246 starting I/O failed: -6 00:22:26.246 Write completed with error (sct=0, sc=8) 00:22:26.246 starting I/O failed: -6 00:22:26.246 Write completed with error (sct=0, sc=8) 00:22:26.246 starting I/O failed: -6 00:22:26.246 Write completed with error (sct=0, sc=8) 00:22:26.246 starting I/O failed: -6 00:22:26.246 Write completed with error (sct=0, sc=8) 00:22:26.246 starting I/O failed: -6 00:22:26.246 Write completed with error (sct=0, sc=8) 00:22:26.246 starting I/O failed: -6 00:22:26.246 Write completed with error (sct=0, sc=8) 00:22:26.246 starting I/O failed: -6 00:22:26.246 [2024-10-08 18:30:19.089816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.246 NVMe io qpair 
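The "CQ transport error -6 (No such device or address)" entries above come from spdk_nvme_qpair_process_completions(): once the TCP connection under a qpair is gone it returns a negative errno (-6 is ENXIO), and every write still outstanding on that qpair completes with the (sct=0, sc=8) status seen in the elided lines. As a rough illustration of how a caller observes this, here is a minimal polling sketch in C; the function and its recovery policy are assumptions for illustration, not the test's actual code.

```c
#include <errno.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* Poll one I/O qpair and surface a transport-level failure.
 * spdk_nvme_qpair_process_completions() returns the number of
 * completions it reaped, or a negative errno such as -ENXIO (-6)
 * once the underlying transport (here: TCP) has failed. */
static int
poll_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc == -ENXIO) {
		/* Matches the log: "CQ transport error -6 (No such device
		 * or address)". Outstanding I/O completes with an error
		 * status and new submissions fail until a reconnect. */
		fprintf(stderr, "qpair transport failed, reconnect needed\n");
		return -1;
	}
	return rc < 0 ? -1 : 0;
}
```

In this run the perf tool simply records the failures and proceeds to the summary below.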
00:22:26.246 Initializing NVMe Controllers
00:22:26.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:26.246 Controller IO queue size 128, less than required.
00:22:26.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
[The same "Attached to NVMe over Fabrics controller at 10.0.0.2:4420" line and queue-size warning pair repeats for cnode1, cnode5, cnode2, cnode3, cnode4, cnode9, cnode6, cnode7 and cnode10.]
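Each "Controller IO queue size 128, less than required" warning means the benchmark asked for a deeper queue than the fabrics controller advertises, so the excess requests sit in a software queue inside the NVMe driver, exactly as the message says. Queue sizing is negotiated at connect time via spdk_nvme_ctrlr_opts; the sketch below shows the shape of that call in C, with the address, subsystem NQN and depths as placeholder assumptions rather than the values this test used.

```c
#include <stdio.h>
#include <string.h>

#include "spdk/nvme.h"

/* Connect to an NVMe-oF/TCP controller with an explicit I/O queue
 * size instead of the library default. All values are illustrative. */
static struct spdk_nvme_ctrlr *
connect_with_queue_size(void)
{
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr_opts opts;

	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	/* Ask for a queue no deeper than the target advertises, so I/O is
	 * not silently queued inside the driver (see the warning above). */
	opts.io_queue_size = 128;
	opts.io_queue_requests = 512;

	return spdk_nvme_connect(&trid, &opts, sizeof(opts));
}
```

Alternatively, lowering the submitted queue depth or the I/O size on the perf command line, as the log itself suggests, avoids the driver-side queuing without touching connect options.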
00:22:26.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:26.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:26.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:26.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:26.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:26.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:26.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:26.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:26.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:26.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:26.246 Initialization complete. Launching workers.
00:22:26.246 ========================================================
00:22:26.246                                                                                Latency(us)
00:22:26.246 Device Information                                                      :     IOPS    MiB/s   Average       min        max
00:22:26.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8)  NSID 1 from core 0:  2170.90    93.28  58966.59    908.96  113466.94
00:22:26.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1)  NSID 1 from core 0:  2193.82    94.27  57787.73    531.66  112170.96
00:22:26.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5)  NSID 1 from core 0:  2138.01    91.87  59888.80    713.11  117219.31
00:22:26.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2)  NSID 1 from core 0:  2173.87    93.41  58312.02    800.43  107278.20
00:22:26.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3)  NSID 1 from core 0:  2202.10    94.62  57573.33    801.65  108010.60
00:22:26.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4)  NSID 1 from core 0:  2186.39    93.95  57995.57    847.47  107255.41
00:22:26.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9)  NSID 1 from core 0:  2198.70    94.48  57685.38    936.18  105669.40
00:22:26.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6)  NSID 1 from core 0:  2208.89    94.91  57435.24    803.29  104433.12
00:22:26.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7)  NSID 1 from core 0:  2213.34    95.10  57356.10    550.66  105045.78
00:22:26.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  2206.76    94.82  57549.13    818.95  111021.90
00:22:26.246 ========================================================
00:22:26.246 Total                                                                   : 21892.79   940.71  58047.53    531.66  117219.31
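In the table above, IOPS and MiB/s are throughput over the run, while Average/min/max are per-I/O completion latencies in microseconds; the Total row's min (531.66, from cnode1) and max (117219.31, from cnode5) are simply the extremes across devices, and its Average is weighted by completion count. As a toy illustration of the bookkeeping behind such a summary, the C sketch below tracks those statistics; it is a hand-rolled example, not spdk_nvme_perf's actual implementation.

```c
#include <float.h>
#include <stdint.h>
#include <stdio.h>

/* Toy per-device latency accumulator in the spirit of the table
 * above: count, sum, min and max of per-I/O latencies in
 * microseconds, reported as IOPS / Average / min / max. */
struct lat_stats {
	uint64_t count;
	double sum_us;
	double min_us;
	double max_us;
};

static void
lat_stats_init(struct lat_stats *s)
{
	s->count = 0;
	s->sum_us = 0.0;
	s->min_us = DBL_MAX;
	s->max_us = 0.0;
}

/* Called once per completed I/O with its measured latency. */
static void
lat_stats_add(struct lat_stats *s, double us)
{
	s->count++;
	s->sum_us += us;
	if (us < s->min_us) {
		s->min_us = us;
	}
	if (us > s->max_us) {
		s->max_us = us;
	}
}

/* Assumes at least one completion was recorded. */
static void
lat_stats_report(const struct lat_stats *s, double run_seconds)
{
	printf("%10.2f %10.2f %10.2f %10.2f\n",
	       (double)s->count / run_seconds, /* IOPS    */
	       s->sum_us / (double)s->count,   /* Average */
	       s->min_us,                      /* min     */
	       s->max_us);                     /* max     */
}
```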
*ERROR*: The recv state of tqpair=0x1613960 is same with the state(6) to be set 00:22:26.246 [2024-10-08 18:30:19.092980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161a4c0 is same with the state(6) to be set 00:22:26.246 [2024-10-08 18:30:19.093008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1613fc0 is same with the state(6) to be set 00:22:26.246 [2024-10-08 18:30:19.093036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1619e60 is same with the state(6) to be set 00:22:26.246 [2024-10-08 18:30:19.093065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16159d0 is same with the state(6) to be set 00:22:26.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:22:26.246 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 485989 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 485989 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 485989 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:27.184 18:30:20 
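The exact spdk_nvme_perf command line scrolled past earlier in the log and is not visible in this excerpt, so the following is a hypothetical reconstruction only, using perf's standard flags (-q queue depth, -o I/O size, -w workload, -t run time, -r transport ID); the queue depth, I/O size, workload and time values here are illustrative assumptions, not the test's actual parameters.

#!/usr/bin/env bash
# Hypothetical sketch of a perf run against one of the subsystems listed above.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

$PERF -q 128 -o 4096 -w randwrite -t 20 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
# A nonzero exit ("errors occurred") is what shutdown_tc4 wants here: the trace
# below runs "NOT wait 485989", i.e. the test asserts that the perf process
# failed because the target was torn down while I/O was still in flight.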
00:22:26.246 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 485989
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 485989
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 485989
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup
00:22:27.184 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:27.184 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:22:27.443 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 485705 ']'
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 485705
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 485705 ']'
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 485705
00:22:27.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (485705) - No such process
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 485705 is not found'
00:22:27.443 Process with pid 485705 is not found
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:29.348 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:29.348
00:22:29.348 real 0m10.403s
00:22:29.348 user 0m27.475s
00:22:29.348 sys 0m5.101s
00:22:29.348 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:29.348 ************************************
00:22:29.348 END TEST nvmf_shutdown_tc4
00:22:29.348 ************************************
00:22:29.348 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:22:29.348
00:22:29.348 real 0m42.826s
00:22:29.348 user 1m48.077s
00:22:29.348 sys 0m14.041s
00:22:29.348 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:22:29.348 ************************************
00:22:29.348 END TEST nvmf_shutdown
00:22:29.348 ************************************
00:22:29.607 18:30:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:22:29.607
00:22:29.607 real 12m8.509s
00:22:29.607 user 26m32.138s
00:22:29.607 sys 3m34.915s
00:22:29.607 18:30:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
18:30:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:29.607 ************************************
00:22:29.607 END TEST nvmf_target_extra
00:22:29.607 ************************************
00:22:29.607 18:30:22 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:22:29.607 18:30:22 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
18:30:22 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
18:30:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:22:29.607 ************************************
00:22:29.607 START TEST nvmf_host
00:22:29.607 ************************************
00:22:29.607 18:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:22:29.607 * Looking for test storage...
00:22:29.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:22:29.607 18:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]]
18:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version
18:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
18:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0
18:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
18:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:22:29.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:29.607 --rc genhtml_branch_coverage=1
00:22:29.607 --rc genhtml_function_coverage=1
00:22:29.607 --rc genhtml_legend=1
00:22:29.607 --rc geninfo_all_blocks=1
00:22:29.607 --rc geninfo_unexecuted_blocks=1
00:22:29.607
00:22:29.607 '
18:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:22:29.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:29.607 --rc genhtml_branch_coverage=1
00:22:29.607 --rc genhtml_function_coverage=1
00:22:29.607 --rc genhtml_legend=1
00:22:29.607 --rc geninfo_all_blocks=1
00:22:29.607 --rc geninfo_unexecuted_blocks=1
00:22:29.607
00:22:29.607 '
18:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:22:29.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:29.607 --rc genhtml_branch_coverage=1
00:22:29.607 --rc genhtml_function_coverage=1
00:22:29.607 --rc genhtml_legend=1
00:22:29.607 --rc geninfo_all_blocks=1
00:22:29.607 --rc geninfo_unexecuted_blocks=1
00:22:29.607
00:22:29.607 '
18:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:22:29.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:29.607 --rc genhtml_branch_coverage=1
00:22:29.607 --rc genhtml_function_coverage=1
00:22:29.607 --rc genhtml_legend=1
00:22:29.607 --rc geninfo_all_blocks=1
00:22:29.607 --rc geninfo_unexecuted_blocks=1
00:22:29.608
00:22:29.608 '
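The `lt 1.15 2` run traced above is scripts/common.sh comparing the installed lcov version against 2 field by field. A distilled bash sketch of the same idea follows; it is a simplified reimplementation for illustration, not the exact SPDK helper.

#!/usr/bin/env bash
# Distilled from the cmp_versions trace above: split two dotted versions on
# ".-:", then compare numerically field by field, padding missing fields with 0.
lt() { # usage: lt 1.15 2  -> returns 0 iff $1 < $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1 # equal versions are not less-than
}
lt 1.15 2 && echo "lcov is older than 2.x"  # matches the trace: 1 < 2 on the first field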
00:22:29.608 18:30:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:29.867 18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
18:30:22 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
18:30:22 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:30:22 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:30:22 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:30:22 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH
18:30:22 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:29.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
18:30:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0
18:30:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
18:30:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
18:30:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
18:30:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:22:29.868 18:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
18:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
18:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:29.868 ************************************
00:22:29.868 START TEST nvmf_multicontroller
00:22:29.868 ************************************
00:22:29.868 18:30:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:22:29.868 * Looking for test storage...
00:22:29.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:22:29.868 18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]]
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-:
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-:
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<'
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 ))
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:22:29.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:29.868 --rc genhtml_branch_coverage=1
00:22:29.868 --rc genhtml_function_coverage=1
00:22:29.868 --rc genhtml_legend=1
00:22:29.868 --rc geninfo_all_blocks=1
00:22:29.868 --rc geninfo_unexecuted_blocks=1
00:22:29.868
00:22:29.868 '
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:22:29.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:29.868 --rc genhtml_branch_coverage=1
00:22:29.868 --rc genhtml_function_coverage=1
00:22:29.868 --rc genhtml_legend=1
00:22:29.868 --rc geninfo_all_blocks=1
00:22:29.868 --rc geninfo_unexecuted_blocks=1
00:22:29.868
00:22:29.868 '
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:22:29.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:29.868 --rc genhtml_branch_coverage=1
00:22:29.868 --rc genhtml_function_coverage=1
00:22:29.868 --rc genhtml_legend=1
00:22:29.868 --rc geninfo_all_blocks=1
00:22:29.868 --rc geninfo_unexecuted_blocks=1
00:22:29.868
00:22:29.868 '
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:22:29.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:29.868 --rc genhtml_branch_coverage=1
00:22:29.868 --rc genhtml_function_coverage=1
00:22:29.868 --rc genhtml_legend=1
00:22:29.868 --rc geninfo_all_blocks=1
00:22:29.868 --rc geninfo_unexecuted_blocks=1
00:22:29.868
00:22:29.868 '
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:22:29.868 18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:29.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']'
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:22:29.869 18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']'
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:30.128 18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]]
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable
18:30:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:36.699 18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=()
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=()
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=()
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=()
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=()
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=()
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=()
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx
00:22:36.699 18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 ))
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:22:36.699 Found 0000:86:00.0 (0x8086 - 0x159b)
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:36.699 18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:22:36.699 Found 0000:86:00.1 (0x8086 - 0x159b)
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 ))
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 ))
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:22:36.699 Found net devices under 0000:86:00.0: cvl_0_0
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 ))
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:22:36.699 Found net devices under 0000:86:00.1: cvl_0_1
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 ))
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]]
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
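The gather_supported_nvmf_pci_devs trace above builds a whitelist of supported NIC device IDs, matches the host's PCI devices against it, and then resolves the kernel net interface names through sysfs. A distilled bash sketch of that idea follows; it is a simplified illustration, not the exact nvmf/common.sh helper, and it only handles the E810 IDs matched in this run.

#!/usr/bin/env bash
# Find Intel E810 NICs (vendor 0x8086, device 0x1592/0x159b as matched above)
# and list the net interfaces the kernel bound to them.
intel=0x8086
e810=(0x1592 0x159b)
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    [[ $vendor == "$intel" ]] || continue
    for id in "${e810[@]}"; do
        [[ $device == "$id" ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        # the netdev name lives under <pci>/net/<ifname>, e.g. cvl_0_0 in this run
        for net in "$pci"/net/*; do
            [[ -e $net ]] && net_devs+=("${net##*/}")
        done
    done
done
printf 'net_devs: %s\n' "${net_devs[@]}"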
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:22:36.699 18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 ))
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
18:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:36.699 18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:36.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:36.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms
00:22:36.699
00:22:36.699 --- 10.0.0.2 ping statistics ---
00:22:36.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:36.699 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:36.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:36.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms
00:22:36.699
00:22:36.699 --- 10.0.0.1 ping statistics ---
00:22:36.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:36.699 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms
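The entries above are nvmf_tcp_init building the split rig this test runs on: the target-side port (cvl_0_0) is moved into its own network namespace and given 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1, the firewall is opened for port 4420, and both directions are ping-verified. Condensed from the exact commands traced above (error handling and the address flushes omitted):

#!/usr/bin/env bash
# Isolate the target-side port in its own namespace so initiator and target
# talk to each other over a real cable on a single host.
set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1       # target ns -> root ns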
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']'
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:22:36.700 18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:36.700 18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=490719
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 490719
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 490719 ']'
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100
18:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:36.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
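nvmfappstart launches nvmf_tgt inside the target namespace (core mask 0xE, i.e. cores 1-3) and waitforlisten then polls until the app's RPC socket answers. A minimal sketch of that polling idea follows; it is a simplified illustration of the traced helper, not the exact autotest_common.sh implementation, and it assumes SPDK's scripts/rpc.py with the real rpc_get_methods RPC.

#!/usr/bin/env bash
# Poll the app's RPC socket until it answers, giving up if the process dies.
pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
for (( retry = 100; retry > 0; retry-- )); do
    kill -0 "$pid" 2>/dev/null || { echo "process $pid died" >&2; exit 1; }
    # rpc.py exits 0 once the target is up and serving RPCs on $rpc_addr
    if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        exit 0
    fi
    sleep 0.5
done
echo "timed out waiting for $rpc_addr" >&2; exit 1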
00:22:36.700 [2024-10-08 18:30:29.221819] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:22:36.700 [2024-10-08 18:30:29.221864] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:36.700 [2024-10-08 18:30:29.292640] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:22:36.700 [2024-10-08 18:30:29.363962] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:36.700 [2024-10-08 18:30:29.364002] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:36.700 [2024-10-08 18:30:29.364009] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:36.700 [2024-10-08 18:30:29.364016] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:36.700 [2024-10-08 18:30:29.364021] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:36.700 [2024-10-08 18:30:29.364996] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:22:36.700 [2024-10-08 18:30:29.365105] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:22:36.700 [2024-10-08 18:30:29.365106] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 ))
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:37.038 [2024-10-08 18:30:30.091216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:37.038 Malloc0
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:37.038 [2024-10-08 18:30:30.149706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:37.038 [2024-10-08 18:30:30.157640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:37.038 Malloc1
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=490908 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 490908 /var/tmp/bdevperf.sock 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 490908 ']' 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
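The bdevperf launch above starts the app idle: -z makes it wait for JSON-RPC on the socket given by -r before any I/O is configured, so the test can attach NVMe paths first and trigger the workload later. A minimal sketch of the same sequence outside the harness, assuming the SPDK tree layout used in this run (rpc_cmd is a harness wrapper around scripts/rpc.py):

  # start bdevperf idle, serving JSON-RPC on a UNIX socket
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  # attach the first path once the socket is accepting commands
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  # kick off the configured workload (the harness does this at multicontroller.sh@95 below)
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests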
00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:37.038 18:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.032 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:38.032 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.033 NVMe0n1 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.033 1 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.033 request: 00:22:38.033 { 00:22:38.033 "name": "NVMe0", 00:22:38.033 "trtype": "tcp", 00:22:38.033 "traddr": "10.0.0.2", 00:22:38.033 "adrfam": "ipv4", 00:22:38.033 "trsvcid": "4420", 00:22:38.033 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:38.033 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:38.033 "hostaddr": "10.0.0.1", 00:22:38.033 "prchk_reftag": false, 00:22:38.033 "prchk_guard": false, 00:22:38.033 "hdgst": false, 00:22:38.033 "ddgst": false, 00:22:38.033 "allow_unrecognized_csi": false, 00:22:38.033 "method": "bdev_nvme_attach_controller", 00:22:38.033 "req_id": 1 00:22:38.033 } 00:22:38.033 Got JSON-RPC error response 00:22:38.033 response: 00:22:38.033 { 00:22:38.033 "code": -114, 00:22:38.033 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:38.033 } 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.033 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.292 request: 00:22:38.292 { 00:22:38.292 "name": "NVMe0", 00:22:38.292 "trtype": "tcp", 00:22:38.292 "traddr": "10.0.0.2", 00:22:38.292 "adrfam": "ipv4", 00:22:38.292 "trsvcid": "4420", 00:22:38.292 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:38.293 "hostaddr": "10.0.0.1", 00:22:38.293 "prchk_reftag": false, 00:22:38.293 "prchk_guard": false, 00:22:38.293 "hdgst": false, 00:22:38.293 "ddgst": false, 00:22:38.293 "allow_unrecognized_csi": false, 00:22:38.293 "method": "bdev_nvme_attach_controller", 00:22:38.293 "req_id": 1 00:22:38.293 } 00:22:38.293 Got JSON-RPC error response 00:22:38.293 response: 00:22:38.293 { 00:22:38.293 "code": -114, 00:22:38.293 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:38.293 } 00:22:38.293 18:30:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.293 request: 00:22:38.293 { 00:22:38.293 "name": "NVMe0", 00:22:38.293 "trtype": "tcp", 00:22:38.293 "traddr": "10.0.0.2", 00:22:38.293 "adrfam": "ipv4", 00:22:38.293 "trsvcid": "4420", 00:22:38.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.293 "hostaddr": "10.0.0.1", 00:22:38.293 "prchk_reftag": false, 00:22:38.293 "prchk_guard": false, 00:22:38.293 "hdgst": false, 00:22:38.293 "ddgst": false, 00:22:38.293 "multipath": "disable", 00:22:38.293 "allow_unrecognized_csi": false, 00:22:38.293 "method": "bdev_nvme_attach_controller", 00:22:38.293 "req_id": 1 00:22:38.293 } 00:22:38.293 Got JSON-RPC error response 00:22:38.293 response: 00:22:38.293 { 00:22:38.293 "code": -114, 00:22:38.293 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:38.293 } 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:38.293 18:30:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.293 request: 00:22:38.293 { 00:22:38.293 "name": "NVMe0", 00:22:38.293 "trtype": "tcp", 00:22:38.293 "traddr": "10.0.0.2", 00:22:38.293 "adrfam": "ipv4", 00:22:38.293 "trsvcid": "4420", 00:22:38.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.293 "hostaddr": "10.0.0.1", 00:22:38.293 "prchk_reftag": false, 00:22:38.293 "prchk_guard": false, 00:22:38.293 "hdgst": false, 00:22:38.293 "ddgst": false, 00:22:38.293 "multipath": "failover", 00:22:38.293 "allow_unrecognized_csi": false, 00:22:38.293 "method": "bdev_nvme_attach_controller", 00:22:38.293 "req_id": 1 00:22:38.293 } 00:22:38.293 Got JSON-RPC error response 00:22:38.293 response: 00:22:38.293 { 00:22:38.293 "code": -114, 00:22:38.293 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:38.293 } 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.293 NVMe0n1 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
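Each NOT-wrapped attach above is a negative test: reusing the controller name NVMe0 with a different hostnqn, with a different subsystem NQN, or with -x disable/-x failover against the same network path must be rejected, and rpc_cmd surfaces that as JSON-RPC error -114, which NOT inverts into success. The unwrapped attach that follows adds 10.0.0.2:4421 as a second path under the existing name, accepted because the subsystem NQN matches the first path. A hedged plain-shell approximation of that pattern (NOT and rpc_cmd are harness helpers, not SPDK tools):

  # must fail: same bdev name, different subsystem NQN
  if ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1; then
      echo "unexpected success: duplicate controller name" >&2; exit 1
  fi
  # accepted: second listener of the same subsystem becomes an extra path
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1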
00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.293 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.293 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.552 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.552 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:38.552 18:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:39.489 { 00:22:39.489 "results": [ 00:22:39.489 { 00:22:39.489 "job": "NVMe0n1", 00:22:39.489 "core_mask": "0x1", 00:22:39.489 "workload": "write", 00:22:39.489 "status": "finished", 00:22:39.489 "queue_depth": 128, 00:22:39.489 "io_size": 4096, 00:22:39.489 "runtime": 1.00461, 00:22:39.489 "iops": 24844.467007097282, 00:22:39.489 "mibps": 97.04869924647376, 00:22:39.489 "io_failed": 0, 00:22:39.489 "io_timeout": 0, 00:22:39.489 "avg_latency_us": 5145.64736697708, 00:22:39.489 "min_latency_us": 3183.177142857143, 00:22:39.489 "max_latency_us": 9050.209523809524 00:22:39.489 } 00:22:39.490 ], 00:22:39.490 "core_count": 1 00:22:39.490 } 00:22:39.490 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:39.490 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.490 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:39.490 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.490 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:39.490 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 490908 00:22:39.490 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 490908 ']' 00:22:39.490 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 490908 00:22:39.490 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:39.490 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:39.490 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 490908 00:22:39.749 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:39.749 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:39.749 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 490908' 00:22:39.749 killing process with pid 490908 00:22:39.749 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 490908 00:22:39.749 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 490908 00:22:39.749 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:39.749 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.749 18:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:39.749 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.749 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:39.749 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.749 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:39.749 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.749 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:39.749 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:39.749 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:39.749 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:39.749 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:22:39.749 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:22:39.749 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:39.749 [2024-10-08 18:30:30.262239] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:22:39.749 [2024-10-08 18:30:30.262286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490908 ] 00:22:39.749 [2024-10-08 18:30:30.329073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.749 [2024-10-08 18:30:30.402797] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.749 [2024-10-08 18:30:31.594827] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 7d1d5c56-e3e4-41c3-8c8b-69262993a778 already exists 00:22:39.749 [2024-10-08 18:30:31.594854] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:7d1d5c56-e3e4-41c3-8c8b-69262993a778 alias for bdev NVMe1n1 00:22:39.749 [2024-10-08 18:30:31.594862] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:39.749 Running I/O for 1 seconds... 00:22:39.749 24831.00 IOPS, 97.00 MiB/s 00:22:39.749 Latency(us) 00:22:39.749 [2024-10-08T16:30:33.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.749 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:39.750 NVMe0n1 : 1.00 24844.47 97.05 0.00 0.00 5145.65 3183.18 9050.21 00:22:39.750 [2024-10-08T16:30:33.073Z] =================================================================================================================== 00:22:39.750 [2024-10-08T16:30:33.073Z] Total : 24844.47 97.05 0.00 0.00 5145.65 3183.18 9050.21 00:22:39.750 Received shutdown signal, test time was about 1.000000 seconds 00:22:39.750 00:22:39.750 Latency(us) 00:22:39.750 [2024-10-08T16:30:33.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.750 [2024-10-08T16:30:33.073Z] =================================================================================================================== 00:22:39.750 [2024-10-08T16:30:33.073Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.750 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:39.750 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:39.750 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:39.750 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:39.750 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:39.750 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:39.750 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:39.750 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:39.750 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:39.750 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:39.750 rmmod nvme_tcp 00:22:39.750 rmmod nvme_fabrics 00:22:40.009 rmmod nvme_keyring 00:22:40.009 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:40.009 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:40.009 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:40.009 
18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 490719 ']' 00:22:40.009 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 490719 00:22:40.009 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 490719 ']' 00:22:40.009 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 490719 00:22:40.009 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:40.009 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:40.009 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 490719 00:22:40.009 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:40.009 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:40.009 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 490719' 00:22:40.009 killing process with pid 490719 00:22:40.009 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 490719 00:22:40.009 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 490719 00:22:40.268 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:40.268 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:40.268 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:40.268 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:40.268 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:22:40.268 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:40.268 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:22:40.268 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:40.268 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:40.268 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.268 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.268 18:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.173 18:30:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:42.173 00:22:42.173 real 0m12.490s 00:22:42.173 user 0m16.886s 00:22:42.173 sys 0m5.372s 00:22:42.173 18:30:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:42.173 18:30:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:42.173 ************************************ 00:22:42.173 END TEST nvmf_multicontroller 00:22:42.173 ************************************ 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.432 ************************************ 00:22:42.432 START TEST nvmf_aer 00:22:42.432 ************************************ 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:42.432 * Looking for test storage... 00:22:42.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:42.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.432 --rc genhtml_branch_coverage=1 00:22:42.432 --rc genhtml_function_coverage=1 00:22:42.432 --rc genhtml_legend=1 00:22:42.432 --rc geninfo_all_blocks=1 00:22:42.432 --rc geninfo_unexecuted_blocks=1 00:22:42.432 00:22:42.432 ' 00:22:42.432 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:42.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.433 --rc genhtml_branch_coverage=1 00:22:42.433 --rc genhtml_function_coverage=1 00:22:42.433 --rc genhtml_legend=1 00:22:42.433 --rc geninfo_all_blocks=1 00:22:42.433 --rc geninfo_unexecuted_blocks=1 00:22:42.433 00:22:42.433 ' 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:42.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.433 --rc genhtml_branch_coverage=1 00:22:42.433 --rc genhtml_function_coverage=1 00:22:42.433 --rc genhtml_legend=1 00:22:42.433 --rc geninfo_all_blocks=1 00:22:42.433 --rc geninfo_unexecuted_blocks=1 00:22:42.433 00:22:42.433 ' 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:42.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.433 --rc genhtml_branch_coverage=1 00:22:42.433 --rc genhtml_function_coverage=1 00:22:42.433 --rc genhtml_legend=1 00:22:42.433 --rc geninfo_all_blocks=1 00:22:42.433 --rc geninfo_unexecuted_blocks=1 00:22:42.433 00:22:42.433 ' 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:42.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:42.433 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:42.693 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:42.693 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:42.693 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.693 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:42.693 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:42.693 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:42.693 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.693 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.693 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.693 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:42.693 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:22:42.693 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:42.693 18:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:47.962 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:47.962 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:47.962 Found net devices under 0000:86:00.0: cvl_0_0 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:47.962 18:30:40 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:47.962 Found net devices under 0000:86:00.1: cvl_0_1 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.962 18:30:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.962 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.962 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.962 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:47.962 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.962 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.962 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.962 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:47.962 
18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:47.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:22:47.962 00:22:47.962 --- 10.0.0.2 ping statistics --- 00:22:47.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.962 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:22:47.962 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:47.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:22:47.962 00:22:47.962 --- 10.0.0.1 ping statistics --- 00:22:47.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.963 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=494751 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 494751 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 494751 ']' 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:47.963 18:30:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:47.963 [2024-10-08 18:30:41.260768] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
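At this point nvmftestinit has already wired the phy loopback topology that the two pings just validated: the first e810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, while its peer (cvl_0_1) stayed in the root namespace as the initiator at 10.0.0.1. Condensed from the trace above, the wiring is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

nvmf_tgt is then launched via ip netns exec cvl_0_0_ns_spdk, so every TCP listener it opens lives on the namespaced target port.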
00:22:47.963 [2024-10-08 18:30:41.260814] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.222 [2024-10-08 18:30:41.329723] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.222 [2024-10-08 18:30:41.400656] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.222 [2024-10-08 18:30:41.400697] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.222 [2024-10-08 18:30:41.400705] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.222 [2024-10-08 18:30:41.400711] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.222 [2024-10-08 18:30:41.400716] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.222 [2024-10-08 18:30:41.402319] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.222 [2024-10-08 18:30:41.402426] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.222 [2024-10-08 18:30:41.402509] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.222 [2024-10-08 18:30:41.402511] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.790 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.790 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:22:48.790 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:48.790 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:48.790 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.049 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.049 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.049 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.049 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.049 [2024-10-08 18:30:42.136214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.049 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.049 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:49.049 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.050 Malloc0 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.050 [2024-10-08 18:30:42.187796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.050 [ 00:22:49.050 { 00:22:49.050 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:49.050 "subtype": "Discovery", 00:22:49.050 "listen_addresses": [], 00:22:49.050 "allow_any_host": true, 00:22:49.050 "hosts": [] 00:22:49.050 }, 00:22:49.050 { 00:22:49.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.050 "subtype": "NVMe", 00:22:49.050 "listen_addresses": [ 00:22:49.050 { 00:22:49.050 "trtype": "TCP", 00:22:49.050 "adrfam": "IPv4", 00:22:49.050 "traddr": "10.0.0.2", 00:22:49.050 "trsvcid": "4420" 00:22:49.050 } 00:22:49.050 ], 00:22:49.050 "allow_any_host": true, 00:22:49.050 "hosts": [], 00:22:49.050 "serial_number": "SPDK00000000000001", 00:22:49.050 "model_number": "SPDK bdev Controller", 00:22:49.050 "max_namespaces": 2, 00:22:49.050 "min_cntlid": 1, 00:22:49.050 "max_cntlid": 65519, 00:22:49.050 "namespaces": [ 00:22:49.050 { 00:22:49.050 "nsid": 1, 00:22:49.050 "bdev_name": "Malloc0", 00:22:49.050 "name": "Malloc0", 00:22:49.050 "nguid": "CD68EA44F630464F8E33874F65A94892", 00:22:49.050 "uuid": "cd68ea44-f630-464f-8e33-874f65a94892" 00:22:49.050 } 00:22:49.050 ] 00:22:49.050 } 00:22:49.050 ] 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=494998 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:49.050 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.309 Malloc1 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.309 Asynchronous Event Request test 00:22:49.309 Attaching to 10.0.0.2 00:22:49.309 Attached to 10.0.0.2 00:22:49.309 Registering asynchronous event callbacks... 00:22:49.309 Starting namespace attribute notice tests for all controllers... 00:22:49.309 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:49.309 aer_cb - Changed Namespace 00:22:49.309 Cleaning up... 
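That is the whole AER scenario: the aer test binary connects, arms its callbacks, and blocks on /tmp/aer_touch_file; hot-adding a second namespace then fires the Namespace Attribute Changed AEN (Changed Namespace List, log page 4) logged above. The target-side sequence condensed to plain rpc.py calls; the scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions, since the log drives this through the rpc_cmd wrapper:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # With the host connected and waiting, this second add_ns is what fires the AEN:
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The subsystem was created with -m 2 (max_namespaces), so nsid 2 is the last slot the AEN can report.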
00:22:49.309 [ 00:22:49.309 { 00:22:49.309 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:49.309 "subtype": "Discovery", 00:22:49.309 "listen_addresses": [], 00:22:49.309 "allow_any_host": true, 00:22:49.309 "hosts": [] 00:22:49.309 }, 00:22:49.309 { 00:22:49.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.309 "subtype": "NVMe", 00:22:49.309 "listen_addresses": [ 00:22:49.309 { 00:22:49.309 "trtype": "TCP", 00:22:49.309 "adrfam": "IPv4", 00:22:49.309 "traddr": "10.0.0.2", 00:22:49.309 "trsvcid": "4420" 00:22:49.309 } 00:22:49.309 ], 00:22:49.309 "allow_any_host": true, 00:22:49.309 "hosts": [], 00:22:49.309 "serial_number": "SPDK00000000000001", 00:22:49.309 "model_number": "SPDK bdev Controller", 00:22:49.309 "max_namespaces": 2, 00:22:49.309 "min_cntlid": 1, 00:22:49.309 "max_cntlid": 65519, 00:22:49.309 "namespaces": [ 00:22:49.309 { 00:22:49.309 "nsid": 1, 00:22:49.309 "bdev_name": "Malloc0", 00:22:49.309 "name": "Malloc0", 00:22:49.309 "nguid": "CD68EA44F630464F8E33874F65A94892", 00:22:49.309 "uuid": "cd68ea44-f630-464f-8e33-874f65a94892" 00:22:49.309 }, 00:22:49.309 { 00:22:49.309 "nsid": 2, 00:22:49.309 "bdev_name": "Malloc1", 00:22:49.309 "name": "Malloc1", 00:22:49.309 "nguid": "6C0FE8BB8F03413688E1B10FFE330B6B", 00:22:49.309 "uuid": "6c0fe8bb-8f03-4136-88e1-b10ffe330b6b" 00:22:49.309 } 00:22:49.309 ] 00:22:49.309 } 00:22:49.309 ] 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 494998 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:49.309 rmmod 
nvme_tcp 00:22:49.309 rmmod nvme_fabrics 00:22:49.309 rmmod nvme_keyring 00:22:49.309 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:49.310 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:49.310 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:49.310 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 494751 ']' 00:22:49.310 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 494751 00:22:49.310 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 494751 ']' 00:22:49.310 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 494751 00:22:49.310 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:22:49.310 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 494751 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 494751' 00:22:49.569 killing process with pid 494751 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 494751 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 494751 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.569 18:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.123 18:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:52.123 00:22:52.123 real 0m9.395s 00:22:52.123 user 0m7.389s 00:22:52.123 sys 0m4.524s 00:22:52.123 18:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:52.123 18:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.123 ************************************ 00:22:52.123 END TEST nvmf_aer 00:22:52.123 ************************************ 00:22:52.123 18:30:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:52.123 18:30:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:52.123 18:30:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:52.123 18:30:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.123 ************************************ 00:22:52.123 START TEST nvmf_async_init 00:22:52.123 ************************************ 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:52.123 * Looking for test storage... 00:22:52.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:52.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.123 --rc genhtml_branch_coverage=1 00:22:52.123 --rc genhtml_function_coverage=1 00:22:52.123 --rc genhtml_legend=1 00:22:52.123 --rc geninfo_all_blocks=1 00:22:52.123 --rc geninfo_unexecuted_blocks=1 00:22:52.123 00:22:52.123 ' 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:52.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.123 --rc genhtml_branch_coverage=1 00:22:52.123 --rc genhtml_function_coverage=1 00:22:52.123 --rc genhtml_legend=1 00:22:52.123 --rc geninfo_all_blocks=1 00:22:52.123 --rc geninfo_unexecuted_blocks=1 00:22:52.123 00:22:52.123 ' 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:52.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.123 --rc genhtml_branch_coverage=1 00:22:52.123 --rc genhtml_function_coverage=1 00:22:52.123 --rc genhtml_legend=1 00:22:52.123 --rc geninfo_all_blocks=1 00:22:52.123 --rc geninfo_unexecuted_blocks=1 00:22:52.123 00:22:52.123 ' 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:52.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.123 --rc genhtml_branch_coverage=1 00:22:52.123 --rc genhtml_function_coverage=1 00:22:52.123 --rc genhtml_legend=1 00:22:52.123 --rc geninfo_all_blocks=1 00:22:52.123 --rc geninfo_unexecuted_blocks=1 00:22:52.123 00:22:52.123 ' 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.123 18:30:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.123 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:52.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:52.124 18:30:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=fda1d365a72741c7b80b1b768dafa677 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:52.124 18:30:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:58.692 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.692 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:58.692 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:58.692 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:58.692 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:58.692 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:58.692 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:58.692 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:58.693 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:58.693 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:58.693 Found net devices under 0000:86:00.0: cvl_0_0 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:58.693 Found net devices under 0000:86:00.1: cvl_0_1 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.693 18:30:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:58.693 18:30:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:58.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:22:58.693 00:22:58.693 --- 10.0.0.2 ping statistics --- 00:22:58.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.693 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:58.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:22:58.693 00:22:58.693 --- 10.0.0.1 ping statistics --- 00:22:58.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.693 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=498527 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 498527 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 498527 ']' 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.693 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.694 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.694 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.694 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:58.694 [2024-10-08 18:30:51.172834] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:22:58.694 [2024-10-08 18:30:51.172887] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.694 [2024-10-08 18:30:51.246783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.694 [2024-10-08 18:30:51.325306] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.694 [2024-10-08 18:30:51.325341] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.694 [2024-10-08 18:30:51.325349] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.694 [2024-10-08 18:30:51.325355] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.694 [2024-10-08 18:30:51.325360] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:58.694 [2024-10-08 18:30:51.325915] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.694 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.694 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:22:58.694 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:58.694 18:30:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.694 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:58.953 [2024-10-08 18:30:52.038999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:58.953 null0 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fda1d365a72741c7b80b1b768dafa677 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:58.953 [2024-10-08 18:30:52.083229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.953 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.212 nvme0n1 00:22:59.212 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.212 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:59.212 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.212 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.212 [ 00:22:59.212 { 00:22:59.212 "name": "nvme0n1", 00:22:59.212 "aliases": [ 00:22:59.212 "fda1d365-a727-41c7-b80b-1b768dafa677" 00:22:59.212 ], 00:22:59.212 "product_name": "NVMe disk", 00:22:59.212 "block_size": 512, 00:22:59.212 "num_blocks": 2097152, 00:22:59.212 "uuid": "fda1d365-a727-41c7-b80b-1b768dafa677", 00:22:59.212 "numa_id": 1, 00:22:59.212 "assigned_rate_limits": { 00:22:59.212 "rw_ios_per_sec": 0, 00:22:59.212 "rw_mbytes_per_sec": 0, 00:22:59.212 "r_mbytes_per_sec": 0, 00:22:59.212 "w_mbytes_per_sec": 0 00:22:59.212 }, 00:22:59.212 "claimed": false, 00:22:59.212 "zoned": false, 00:22:59.212 "supported_io_types": { 00:22:59.212 "read": true, 00:22:59.212 "write": true, 00:22:59.212 "unmap": false, 00:22:59.212 "flush": true, 00:22:59.212 "reset": true, 00:22:59.212 "nvme_admin": true, 00:22:59.212 "nvme_io": true, 00:22:59.212 "nvme_io_md": false, 00:22:59.212 "write_zeroes": true, 00:22:59.212 "zcopy": false, 00:22:59.212 "get_zone_info": false, 00:22:59.212 "zone_management": false, 00:22:59.212 "zone_append": false, 00:22:59.212 "compare": true, 00:22:59.212 "compare_and_write": true, 00:22:59.212 "abort": true, 00:22:59.212 "seek_hole": false, 00:22:59.212 "seek_data": false, 00:22:59.212 "copy": true, 00:22:59.212 "nvme_iov_md": false 00:22:59.212 }, 00:22:59.212 
"memory_domains": [ 00:22:59.212 { 00:22:59.212 "dma_device_id": "system", 00:22:59.212 "dma_device_type": 1 00:22:59.212 } 00:22:59.212 ], 00:22:59.212 "driver_specific": { 00:22:59.212 "nvme": [ 00:22:59.212 { 00:22:59.212 "trid": { 00:22:59.212 "trtype": "TCP", 00:22:59.212 "adrfam": "IPv4", 00:22:59.212 "traddr": "10.0.0.2", 00:22:59.212 "trsvcid": "4420", 00:22:59.212 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:59.212 }, 00:22:59.212 "ctrlr_data": { 00:22:59.212 "cntlid": 1, 00:22:59.212 "vendor_id": "0x8086", 00:22:59.212 "model_number": "SPDK bdev Controller", 00:22:59.212 "serial_number": "00000000000000000000", 00:22:59.212 "firmware_revision": "25.01", 00:22:59.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:59.212 "oacs": { 00:22:59.212 "security": 0, 00:22:59.212 "format": 0, 00:22:59.212 "firmware": 0, 00:22:59.212 "ns_manage": 0 00:22:59.212 }, 00:22:59.212 "multi_ctrlr": true, 00:22:59.212 "ana_reporting": false 00:22:59.212 }, 00:22:59.212 "vs": { 00:22:59.212 "nvme_version": "1.3" 00:22:59.212 }, 00:22:59.212 "ns_data": { 00:22:59.212 "id": 1, 00:22:59.212 "can_share": true 00:22:59.212 } 00:22:59.212 } 00:22:59.212 ], 00:22:59.213 "mp_policy": "active_passive" 00:22:59.213 } 00:22:59.213 } 00:22:59.213 ] 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.213 [2024-10-08 18:30:52.344752] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.213 [2024-10-08 18:30:52.344807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de060 (9): Bad file descriptor 00:22:59.213 [2024-10-08 18:30:52.476456] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.213 [ 00:22:59.213 { 00:22:59.213 "name": "nvme0n1", 00:22:59.213 "aliases": [ 00:22:59.213 "fda1d365-a727-41c7-b80b-1b768dafa677" 00:22:59.213 ], 00:22:59.213 "product_name": "NVMe disk", 00:22:59.213 "block_size": 512, 00:22:59.213 "num_blocks": 2097152, 00:22:59.213 "uuid": "fda1d365-a727-41c7-b80b-1b768dafa677", 00:22:59.213 "numa_id": 1, 00:22:59.213 "assigned_rate_limits": { 00:22:59.213 "rw_ios_per_sec": 0, 00:22:59.213 "rw_mbytes_per_sec": 0, 00:22:59.213 "r_mbytes_per_sec": 0, 00:22:59.213 "w_mbytes_per_sec": 0 00:22:59.213 }, 00:22:59.213 "claimed": false, 00:22:59.213 "zoned": false, 00:22:59.213 "supported_io_types": { 00:22:59.213 "read": true, 00:22:59.213 "write": true, 00:22:59.213 "unmap": false, 00:22:59.213 "flush": true, 00:22:59.213 "reset": true, 00:22:59.213 "nvme_admin": true, 00:22:59.213 "nvme_io": true, 00:22:59.213 "nvme_io_md": false, 00:22:59.213 "write_zeroes": true, 00:22:59.213 "zcopy": false, 00:22:59.213 "get_zone_info": false, 00:22:59.213 "zone_management": false, 00:22:59.213 "zone_append": false, 00:22:59.213 "compare": true, 00:22:59.213 "compare_and_write": true, 00:22:59.213 "abort": true, 00:22:59.213 "seek_hole": false, 00:22:59.213 "seek_data": false, 00:22:59.213 "copy": true, 00:22:59.213 "nvme_iov_md": false 00:22:59.213 }, 00:22:59.213 "memory_domains": [ 00:22:59.213 { 00:22:59.213 "dma_device_id": "system", 00:22:59.213 "dma_device_type": 1 00:22:59.213 } 00:22:59.213 ], 00:22:59.213 "driver_specific": { 00:22:59.213 "nvme": [ 00:22:59.213 { 00:22:59.213 "trid": { 00:22:59.213 "trtype": "TCP", 00:22:59.213 "adrfam": "IPv4", 00:22:59.213 "traddr": "10.0.0.2", 00:22:59.213 "trsvcid": "4420", 00:22:59.213 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:59.213 }, 00:22:59.213 "ctrlr_data": { 00:22:59.213 "cntlid": 2, 00:22:59.213 "vendor_id": "0x8086", 00:22:59.213 "model_number": "SPDK bdev Controller", 00:22:59.213 "serial_number": "00000000000000000000", 00:22:59.213 "firmware_revision": "25.01", 00:22:59.213 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:59.213 "oacs": { 00:22:59.213 "security": 0, 00:22:59.213 "format": 0, 00:22:59.213 "firmware": 0, 00:22:59.213 "ns_manage": 0 00:22:59.213 }, 00:22:59.213 "multi_ctrlr": true, 00:22:59.213 "ana_reporting": false 00:22:59.213 }, 00:22:59.213 "vs": { 00:22:59.213 "nvme_version": "1.3" 00:22:59.213 }, 00:22:59.213 "ns_data": { 00:22:59.213 "id": 1, 00:22:59.213 "can_share": true 00:22:59.213 } 00:22:59.213 } 00:22:59.213 ], 00:22:59.213 "mp_policy": "active_passive" 00:22:59.213 } 00:22:59.213 } 00:22:59.213 ] 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
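Two things worth pulling out of the dumps above before the TLS variant below. First, the reset is a single RPC, and the reconnect allocates a fresh controller, which is why "cntlid" moves from 1 to 2 between the two bdev_get_bdevs dumps. Second, the test's real assertion is that the nguid passed at namespace creation round-trips into the attached bdev's uuid/alias (fda1d365-a727-41c7-b80b-1b768dafa677). Condensed, again assuming scripts/rpc.py against the default socket:

  scripts/rpc.py bdev_null_create null0 1024 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fda1d365a72741c7b80b1b768dafa677
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_get_bdevs -b nvme0n1         # "uuid": "fda1d365-a727-41c7-b80b-1b768dafa677"
  scripts/rpc.py bdev_nvme_reset_controller nvme0  # reconnect; the next dump shows "cntlid": 2
  scripts/rpc.py bdev_nvme_detach_controller nvme0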
00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.6tgXQMDv4u 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.6tgXQMDv4u 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.6tgXQMDv4u 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.213 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.472 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:59.472 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.472 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.472 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.472 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:59.472 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.472 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.472 [2024-10-08 18:30:52.545356] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:59.472 [2024-10-08 18:30:52.545450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.473 [2024-10-08 18:30:52.569442] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.473 nvme0n1 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.473 [ 00:22:59.473 { 00:22:59.473 "name": "nvme0n1", 00:22:59.473 "aliases": [ 00:22:59.473 "fda1d365-a727-41c7-b80b-1b768dafa677" 00:22:59.473 ], 00:22:59.473 "product_name": "NVMe disk", 00:22:59.473 "block_size": 512, 00:22:59.473 "num_blocks": 2097152, 00:22:59.473 "uuid": "fda1d365-a727-41c7-b80b-1b768dafa677", 00:22:59.473 "numa_id": 1, 00:22:59.473 "assigned_rate_limits": { 00:22:59.473 "rw_ios_per_sec": 0, 00:22:59.473 "rw_mbytes_per_sec": 0, 00:22:59.473 "r_mbytes_per_sec": 0, 00:22:59.473 "w_mbytes_per_sec": 0 00:22:59.473 }, 00:22:59.473 "claimed": false, 00:22:59.473 "zoned": false, 00:22:59.473 "supported_io_types": { 00:22:59.473 "read": true, 00:22:59.473 "write": true, 00:22:59.473 "unmap": false, 00:22:59.473 "flush": true, 00:22:59.473 "reset": true, 00:22:59.473 "nvme_admin": true, 00:22:59.473 "nvme_io": true, 00:22:59.473 "nvme_io_md": false, 00:22:59.473 "write_zeroes": true, 00:22:59.473 "zcopy": false, 00:22:59.473 "get_zone_info": false, 00:22:59.473 "zone_management": false, 00:22:59.473 "zone_append": false, 00:22:59.473 "compare": true, 00:22:59.473 "compare_and_write": true, 00:22:59.473 "abort": true, 00:22:59.473 "seek_hole": false, 00:22:59.473 "seek_data": false, 00:22:59.473 "copy": true, 00:22:59.473 "nvme_iov_md": false 00:22:59.473 }, 00:22:59.473 "memory_domains": [ 00:22:59.473 { 00:22:59.473 "dma_device_id": "system", 00:22:59.473 "dma_device_type": 1 00:22:59.473 } 00:22:59.473 ], 00:22:59.473 "driver_specific": { 00:22:59.473 "nvme": [ 00:22:59.473 { 00:22:59.473 "trid": { 00:22:59.473 "trtype": "TCP", 00:22:59.473 "adrfam": "IPv4", 00:22:59.473 "traddr": "10.0.0.2", 00:22:59.473 "trsvcid": "4421", 00:22:59.473 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:59.473 }, 00:22:59.473 "ctrlr_data": { 00:22:59.473 "cntlid": 3, 00:22:59.473 "vendor_id": "0x8086", 00:22:59.473 "model_number": "SPDK bdev Controller", 00:22:59.473 "serial_number": "00000000000000000000", 00:22:59.473 "firmware_revision": "25.01", 00:22:59.473 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:59.473 "oacs": { 00:22:59.473 "security": 0, 00:22:59.473 "format": 0, 00:22:59.473 "firmware": 0, 00:22:59.473 "ns_manage": 0 00:22:59.473 }, 00:22:59.473 "multi_ctrlr": true, 00:22:59.473 "ana_reporting": false 00:22:59.473 }, 00:22:59.473 "vs": { 00:22:59.473 "nvme_version": "1.3" 00:22:59.473 }, 00:22:59.473 "ns_data": { 00:22:59.473 "id": 1, 00:22:59.473 "can_share": true 00:22:59.473 } 00:22:59.473 } 00:22:59.473 ], 00:22:59.473 "mp_policy": "active_passive" 00:22:59.473 } 00:22:59.473 } 00:22:59.473 ] 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.6tgXQMDv4u 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
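Condensed, the TLS exchange the log walks through is: register a pre-shared key file in the keyring, turn off allow-any-host on the subsystem, open a second listener on port 4421 with --secure-channel, grant the host NQN with that PSK, and attach from the initiator side using the same key. A minimal sketch of that flow using the RPC calls visible above, assuming the interchange-format key has been written to $KEY_FILE and chmod'ed to 0600 as the test does:

  # Register the PSK (NVMeTLSkey-1 interchange format; file mode must be 0600)
  ./scripts/rpc.py keyring_file_add_key key0 "$KEY_FILE"
  # Require explicit host grants instead of allow-any-host
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  # TLS-enabled listener on a separate port
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  # Grant the host NQN and bind it to the key
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  # Initiator side: attach over the secured listener with the same key
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both NOTICE lines in the log are worth keeping in mind: this SPDK build still flags NVMe/TCP TLS support as experimental on both the listen and attach paths.
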
00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:59.473 rmmod nvme_tcp 00:22:59.473 rmmod nvme_fabrics 00:22:59.473 rmmod nvme_keyring 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 498527 ']' 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 498527 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 498527 ']' 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 498527 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 498527 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 498527' 00:22:59.473 killing process with pid 498527 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 498527 00:22:59.473 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 498527 00:22:59.732 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:59.732 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:59.732 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:59.732 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:59.732 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:22:59.732 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:59.732 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:22:59.732 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:59.732 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:59.732 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.732 
18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.732 18:30:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:02.267 00:23:02.267 real 0m10.013s 00:23:02.267 user 0m3.805s 00:23:02.267 sys 0m4.795s 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.267 ************************************ 00:23:02.267 END TEST nvmf_async_init 00:23:02.267 ************************************ 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.267 ************************************ 00:23:02.267 START TEST dma 00:23:02.267 ************************************ 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:02.267 * Looking for test storage... 00:23:02.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:02.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.267 --rc genhtml_branch_coverage=1 00:23:02.267 --rc genhtml_function_coverage=1 00:23:02.267 --rc genhtml_legend=1 00:23:02.267 --rc geninfo_all_blocks=1 00:23:02.267 --rc geninfo_unexecuted_blocks=1 00:23:02.267 00:23:02.267 ' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:02.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.267 --rc genhtml_branch_coverage=1 00:23:02.267 --rc genhtml_function_coverage=1 00:23:02.267 --rc genhtml_legend=1 00:23:02.267 --rc geninfo_all_blocks=1 00:23:02.267 --rc geninfo_unexecuted_blocks=1 00:23:02.267 00:23:02.267 ' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:02.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.267 --rc genhtml_branch_coverage=1 00:23:02.267 --rc genhtml_function_coverage=1 00:23:02.267 --rc genhtml_legend=1 00:23:02.267 --rc geninfo_all_blocks=1 00:23:02.267 --rc geninfo_unexecuted_blocks=1 00:23:02.267 00:23:02.267 ' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:02.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.267 --rc genhtml_branch_coverage=1 00:23:02.267 --rc genhtml_function_coverage=1 00:23:02.267 --rc genhtml_legend=1 00:23:02.267 --rc geninfo_all_blocks=1 00:23:02.267 --rc geninfo_unexecuted_blocks=1 00:23:02.267 00:23:02.267 ' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.267 
18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:02.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:02.267 00:23:02.267 real 0m0.211s 00:23:02.267 user 0m0.129s 00:23:02.267 sys 0m0.096s 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:02.267 ************************************ 00:23:02.267 END TEST dma 00:23:02.267 ************************************ 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.267 ************************************ 00:23:02.267 START TEST nvmf_identify 00:23:02.267 
************************************ 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:02.267 * Looking for test storage... 00:23:02.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:02.267 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:02.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.268 --rc genhtml_branch_coverage=1 00:23:02.268 --rc genhtml_function_coverage=1 00:23:02.268 --rc genhtml_legend=1 00:23:02.268 --rc geninfo_all_blocks=1 00:23:02.268 --rc geninfo_unexecuted_blocks=1 00:23:02.268 00:23:02.268 ' 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:02.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.268 --rc genhtml_branch_coverage=1 00:23:02.268 --rc genhtml_function_coverage=1 00:23:02.268 --rc genhtml_legend=1 00:23:02.268 --rc geninfo_all_blocks=1 00:23:02.268 --rc geninfo_unexecuted_blocks=1 00:23:02.268 00:23:02.268 ' 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:02.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.268 --rc genhtml_branch_coverage=1 00:23:02.268 --rc genhtml_function_coverage=1 00:23:02.268 --rc genhtml_legend=1 00:23:02.268 --rc geninfo_all_blocks=1 00:23:02.268 --rc geninfo_unexecuted_blocks=1 00:23:02.268 00:23:02.268 ' 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:02.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.268 --rc genhtml_branch_coverage=1 00:23:02.268 --rc genhtml_function_coverage=1 00:23:02.268 --rc genhtml_legend=1 00:23:02.268 --rc geninfo_all_blocks=1 00:23:02.268 --rc geninfo_unexecuted_blocks=1 00:23:02.268 00:23:02.268 ' 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:02.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:02.268 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:02.526 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.526 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:02.526 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:02.526 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:02.526 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.526 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.526 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.526 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:02.526 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:02.526 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:02.526 18:30:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:07.800 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.800 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:07.800 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:07.800 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:07.800 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:07.800 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:07.800 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:07.800 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:07.800 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:07.800 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:07.800 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:07.800 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:07.801 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:07.801 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:07.801 Found net devices under 0000:86:00.0: cvl_0_0 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:07.801 Found net devices under 0000:86:00.1: cvl_0_1 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:07.801 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:08.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:23:08.060 00:23:08.060 --- 10.0.0.2 ping statistics --- 00:23:08.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.060 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:08.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:08.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:23:08.060 00:23:08.060 --- 10.0.0.1 ping statistics --- 00:23:08.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.060 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:08.060 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:08.319 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:08.319 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:08.319 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:08.319 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=502366 00:23:08.319 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:08.319 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:08.319 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 502366 00:23:08.319 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 502366 ']' 00:23:08.319 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.319 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:08.319 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.319 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:08.319 18:31:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:08.319 [2024-10-08 18:31:01.459917] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:23:08.319 [2024-10-08 18:31:01.459967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.319 [2024-10-08 18:31:01.534033] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:08.319 [2024-10-08 18:31:01.616813] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.319 [2024-10-08 18:31:01.616852] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.319 [2024-10-08 18:31:01.616859] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.319 [2024-10-08 18:31:01.616866] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.319 [2024-10-08 18:31:01.616871] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.319 [2024-10-08 18:31:01.618476] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.319 [2024-10-08 18:31:01.618585] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.319 [2024-10-08 18:31:01.618691] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.319 [2024-10-08 18:31:01.618692] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:09.260 [2024-10-08 18:31:02.297565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:09.260 Malloc0 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.260 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.261 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.261 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:09.261 [2024-10-08 18:31:02.381026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.261 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.261 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:09.261 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.261 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:09.261 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.261 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:09.261 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.261 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:09.261 [ 00:23:09.261 { 00:23:09.261 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:09.261 "subtype": "Discovery", 00:23:09.261 "listen_addresses": [ 00:23:09.261 { 00:23:09.261 "trtype": "TCP", 00:23:09.261 "adrfam": "IPv4", 00:23:09.261 "traddr": "10.0.0.2", 00:23:09.261 "trsvcid": "4420" 00:23:09.261 } 00:23:09.261 ], 00:23:09.261 "allow_any_host": true, 00:23:09.261 "hosts": [] 00:23:09.261 }, 00:23:09.261 { 00:23:09.261 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.261 "subtype": "NVMe", 00:23:09.261 "listen_addresses": [ 00:23:09.261 { 00:23:09.261 "trtype": "TCP", 00:23:09.261 "adrfam": "IPv4", 00:23:09.261 "traddr": "10.0.0.2", 00:23:09.261 "trsvcid": "4420" 00:23:09.261 } 00:23:09.261 ], 00:23:09.261 "allow_any_host": true, 00:23:09.261 "hosts": [], 00:23:09.261 "serial_number": "SPDK00000000000001", 00:23:09.261 "model_number": "SPDK bdev Controller", 00:23:09.261 "max_namespaces": 32, 00:23:09.261 "min_cntlid": 1, 00:23:09.261 "max_cntlid": 65519, 00:23:09.261 "namespaces": [ 00:23:09.261 { 00:23:09.261 "nsid": 1, 00:23:09.261 "bdev_name": "Malloc0", 00:23:09.261 "name": "Malloc0", 00:23:09.261 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:09.261 "eui64": "ABCDEF0123456789", 00:23:09.261 "uuid": "86aa4e7d-b83c-45d2-9ef8-e026c9649f3e" 00:23:09.261 } 00:23:09.261 ] 00:23:09.261 } 00:23:09.261 ] 00:23:09.261 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.261 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:09.261 [2024-10-08 18:31:02.434748] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:23:09.261 [2024-10-08 18:31:02.434796] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid502602 ] 00:23:09.261 [2024-10-08 18:31:02.461871] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:09.261 [2024-10-08 18:31:02.461921] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:09.261 [2024-10-08 18:31:02.461926] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:09.261 [2024-10-08 18:31:02.461939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:09.261 [2024-10-08 18:31:02.461947] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:09.261 [2024-10-08 18:31:02.465655] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:09.261 [2024-10-08 18:31:02.465695] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xed0760 0 00:23:09.261 [2024-10-08 18:31:02.473390] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:09.261 [2024-10-08 18:31:02.473406] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:09.261 [2024-10-08 18:31:02.473411] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:09.261 [2024-10-08 18:31:02.473414] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:09.261 [2024-10-08 18:31:02.473448] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.261 [2024-10-08 18:31:02.473454] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.261 [2024-10-08 18:31:02.473458] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed0760) 00:23:09.261 [2024-10-08 18:31:02.473473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:09.261 [2024-10-08 18:31:02.473490] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30480, cid 0, qid 0 00:23:09.261 [2024-10-08 18:31:02.481387] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.261 [2024-10-08 18:31:02.481396] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.261 [2024-10-08 18:31:02.481399] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.261 [2024-10-08 18:31:02.481403] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30480) on tqpair=0xed0760 00:23:09.261 [2024-10-08 18:31:02.481413] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:09.261 [2024-10-08 18:31:02.481419] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:09.261 [2024-10-08 18:31:02.481424] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:09.261 [2024-10-08 18:31:02.481436] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.261 [2024-10-08 18:31:02.481440] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.261 [2024-10-08 18:31:02.481443] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed0760) 00:23:09.261 [2024-10-08 18:31:02.481450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.261 [2024-10-08 18:31:02.481463] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30480, cid 0, qid 0 00:23:09.261 [2024-10-08 18:31:02.481619] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.261 [2024-10-08 18:31:02.481625] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.261 [2024-10-08 18:31:02.481628] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.261 [2024-10-08 18:31:02.481632] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30480) on tqpair=0xed0760 00:23:09.261 [2024-10-08 18:31:02.481636] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:09.261 [2024-10-08 18:31:02.481643] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:09.261 [2024-10-08 18:31:02.481649] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.261 [2024-10-08 18:31:02.481653] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.261 [2024-10-08 18:31:02.481656] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed0760) 00:23:09.261 [2024-10-08 18:31:02.481662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.261 [2024-10-08 18:31:02.481672] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30480, cid 0, qid 0 00:23:09.261 [2024-10-08 18:31:02.481731] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.261 [2024-10-08 18:31:02.481736] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.261 [2024-10-08 18:31:02.481739] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.261 [2024-10-08 18:31:02.481743] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30480) on tqpair=0xed0760 00:23:09.261 [2024-10-08 18:31:02.481747] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:09.261 [2024-10-08 18:31:02.481754] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:09.261 [2024-10-08 18:31:02.481760] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.261 [2024-10-08 18:31:02.481763] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.261 [2024-10-08 18:31:02.481766] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed0760) 00:23:09.261 [2024-10-08 18:31:02.481774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.261 [2024-10-08 18:31:02.481784] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30480, cid 0, qid 0 00:23:09.261 
[2024-10-08 18:31:02.481848] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.261 [2024-10-08 18:31:02.481854] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.261 [2024-10-08 18:31:02.481857] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.261 [2024-10-08 18:31:02.481860] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30480) on tqpair=0xed0760 00:23:09.261 [2024-10-08 18:31:02.481865] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:09.261 [2024-10-08 18:31:02.481873] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.261 [2024-10-08 18:31:02.481877] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.261 [2024-10-08 18:31:02.481880] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed0760) 00:23:09.261 [2024-10-08 18:31:02.481886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.261 [2024-10-08 18:31:02.481895] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30480, cid 0, qid 0 00:23:09.261 [2024-10-08 18:31:02.481969] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.261 [2024-10-08 18:31:02.481974] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.261 [2024-10-08 18:31:02.481977] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.261 [2024-10-08 18:31:02.481981] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30480) on tqpair=0xed0760 00:23:09.261 [2024-10-08 18:31:02.481985] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:09.261 [2024-10-08 18:31:02.481989] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:09.261 [2024-10-08 18:31:02.481997] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:09.261 [2024-10-08 18:31:02.482102] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:09.261 [2024-10-08 18:31:02.482107] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:09.261 [2024-10-08 18:31:02.482114] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482118] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482121] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed0760) 00:23:09.262 [2024-10-08 18:31:02.482126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.262 [2024-10-08 18:31:02.482136] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30480, cid 0, qid 0 00:23:09.262 [2024-10-08 18:31:02.482197] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.262 [2024-10-08 18:31:02.482203] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:23:09.262 [2024-10-08 18:31:02.482206] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482209] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30480) on tqpair=0xed0760 00:23:09.262 [2024-10-08 18:31:02.482213] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:09.262 [2024-10-08 18:31:02.482221] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482225] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482230] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed0760) 00:23:09.262 [2024-10-08 18:31:02.482235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.262 [2024-10-08 18:31:02.482245] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30480, cid 0, qid 0 00:23:09.262 [2024-10-08 18:31:02.482305] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.262 [2024-10-08 18:31:02.482311] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.262 [2024-10-08 18:31:02.482314] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482317] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30480) on tqpair=0xed0760 00:23:09.262 [2024-10-08 18:31:02.482321] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:09.262 [2024-10-08 18:31:02.482326] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:09.262 [2024-10-08 18:31:02.482332] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:09.262 [2024-10-08 18:31:02.482344] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:09.262 [2024-10-08 18:31:02.482353] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482356] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed0760) 00:23:09.262 [2024-10-08 18:31:02.482362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.262 [2024-10-08 18:31:02.482371] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30480, cid 0, qid 0 00:23:09.262 [2024-10-08 18:31:02.482468] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:09.262 [2024-10-08 18:31:02.482474] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:09.262 [2024-10-08 18:31:02.482477] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482481] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xed0760): datao=0, datal=4096, cccid=0 00:23:09.262 [2024-10-08 18:31:02.482485] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf30480) on tqpair(0xed0760): expected_datao=0, payload_size=4096 
00:23:09.262 [2024-10-08 18:31:02.482489] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482496] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482500] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482516] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.262 [2024-10-08 18:31:02.482521] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.262 [2024-10-08 18:31:02.482524] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482528] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30480) on tqpair=0xed0760 00:23:09.262 [2024-10-08 18:31:02.482534] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:09.262 [2024-10-08 18:31:02.482538] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:09.262 [2024-10-08 18:31:02.482542] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:09.262 [2024-10-08 18:31:02.482547] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:09.262 [2024-10-08 18:31:02.482551] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:09.262 [2024-10-08 18:31:02.482557] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:09.262 [2024-10-08 18:31:02.482565] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:09.262 [2024-10-08 18:31:02.482573] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482577] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482580] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed0760) 00:23:09.262 [2024-10-08 18:31:02.482586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:09.262 [2024-10-08 18:31:02.482597] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30480, cid 0, qid 0 00:23:09.262 [2024-10-08 18:31:02.482663] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.262 [2024-10-08 18:31:02.482668] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.262 [2024-10-08 18:31:02.482671] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482675] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30480) on tqpair=0xed0760 00:23:09.262 [2024-10-08 18:31:02.482682] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482685] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482688] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed0760) 00:23:09.262 [2024-10-08 18:31:02.482693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.262 [2024-10-08 18:31:02.482699] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482702] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482705] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xed0760) 00:23:09.262 [2024-10-08 18:31:02.482710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.262 [2024-10-08 18:31:02.482715] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482718] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482721] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xed0760) 00:23:09.262 [2024-10-08 18:31:02.482726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.262 [2024-10-08 18:31:02.482731] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482735] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482738] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.262 [2024-10-08 18:31:02.482743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.262 [2024-10-08 18:31:02.482747] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:09.262 [2024-10-08 18:31:02.482757] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:09.262 [2024-10-08 18:31:02.482763] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482766] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xed0760) 00:23:09.262 [2024-10-08 18:31:02.482772] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.262 [2024-10-08 18:31:02.482783] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30480, cid 0, qid 0 00:23:09.262 [2024-10-08 18:31:02.482790] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30600, cid 1, qid 0 00:23:09.262 [2024-10-08 18:31:02.482794] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30780, cid 2, qid 0 00:23:09.262 [2024-10-08 18:31:02.482798] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0 00:23:09.262 [2024-10-08 18:31:02.482802] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30a80, cid 4, qid 0 00:23:09.262 [2024-10-08 18:31:02.482895] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.262 [2024-10-08 18:31:02.482901] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.262 [2024-10-08 18:31:02.482904] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482907] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30a80) on 
tqpair=0xed0760 00:23:09.262 [2024-10-08 18:31:02.482912] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:09.262 [2024-10-08 18:31:02.482916] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:09.262 [2024-10-08 18:31:02.482925] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.482928] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xed0760) 00:23:09.262 [2024-10-08 18:31:02.482934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.262 [2024-10-08 18:31:02.482943] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30a80, cid 4, qid 0 00:23:09.262 [2024-10-08 18:31:02.483012] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:09.262 [2024-10-08 18:31:02.483018] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:09.262 [2024-10-08 18:31:02.483021] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.483025] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xed0760): datao=0, datal=4096, cccid=4 00:23:09.262 [2024-10-08 18:31:02.483028] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf30a80) on tqpair(0xed0760): expected_datao=0, payload_size=4096 00:23:09.262 [2024-10-08 18:31:02.483032] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.483038] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.483041] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.483050] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.262 [2024-10-08 18:31:02.483056] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.262 [2024-10-08 18:31:02.483059] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.262 [2024-10-08 18:31:02.483062] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30a80) on tqpair=0xed0760 00:23:09.262 [2024-10-08 18:31:02.483073] nvme_ctrlr.c:4220:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:09.262 [2024-10-08 18:31:02.483094] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.483098] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xed0760) 00:23:09.263 [2024-10-08 18:31:02.483104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.263 [2024-10-08 18:31:02.483109] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.483113] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.483116] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xed0760) 00:23:09.263 [2024-10-08 18:31:02.483121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.263 [2024-10-08 18:31:02.483132] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30a80, cid 4, qid 0 00:23:09.263 [2024-10-08 18:31:02.483139] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30c00, cid 5, qid 0 00:23:09.263 [2024-10-08 18:31:02.483234] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:09.263 [2024-10-08 18:31:02.483240] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:09.263 [2024-10-08 18:31:02.483243] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.483246] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xed0760): datao=0, datal=1024, cccid=4 00:23:09.263 [2024-10-08 18:31:02.483250] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf30a80) on tqpair(0xed0760): expected_datao=0, payload_size=1024 00:23:09.263 [2024-10-08 18:31:02.483254] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.483259] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.483263] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.483267] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.263 [2024-10-08 18:31:02.483272] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.263 [2024-10-08 18:31:02.483275] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.483279] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30c00) on tqpair=0xed0760 00:23:09.263 [2024-10-08 18:31:02.523520] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.263 [2024-10-08 18:31:02.523533] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.263 [2024-10-08 18:31:02.523537] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.523540] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30a80) on tqpair=0xed0760 00:23:09.263 [2024-10-08 18:31:02.523557] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.523562] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xed0760) 00:23:09.263 [2024-10-08 18:31:02.523570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.263 [2024-10-08 18:31:02.523586] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30a80, cid 4, qid 0 00:23:09.263 [2024-10-08 18:31:02.523659] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:09.263 [2024-10-08 18:31:02.523665] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:09.263 [2024-10-08 18:31:02.523669] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.523672] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xed0760): datao=0, datal=3072, cccid=4 00:23:09.263 [2024-10-08 18:31:02.523676] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf30a80) on tqpair(0xed0760): expected_datao=0, payload_size=3072 00:23:09.263 [2024-10-08 18:31:02.523680] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.523686] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:09.263 
[2024-10-08 18:31:02.523689] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.523715] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.263 [2024-10-08 18:31:02.523720] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.263 [2024-10-08 18:31:02.523724] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.523727] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30a80) on tqpair=0xed0760 00:23:09.263 [2024-10-08 18:31:02.523734] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.523738] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xed0760) 00:23:09.263 [2024-10-08 18:31:02.523743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.263 [2024-10-08 18:31:02.523761] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30a80, cid 4, qid 0 00:23:09.263 [2024-10-08 18:31:02.523832] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:09.263 [2024-10-08 18:31:02.523838] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:09.263 [2024-10-08 18:31:02.523841] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.523844] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xed0760): datao=0, datal=8, cccid=4 00:23:09.263 [2024-10-08 18:31:02.523848] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf30a80) on tqpair(0xed0760): expected_datao=0, payload_size=8 00:23:09.263 [2024-10-08 18:31:02.523852] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.523857] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.523860] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.567394] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.263 [2024-10-08 18:31:02.567408] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.263 [2024-10-08 18:31:02.567412] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.263 [2024-10-08 18:31:02.567416] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30a80) on tqpair=0xed0760 00:23:09.263 ===================================================== 00:23:09.263 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:09.263 ===================================================== 00:23:09.263 Controller Capabilities/Features 00:23:09.263 ================================ 00:23:09.263 Vendor ID: 0000 00:23:09.263 Subsystem Vendor ID: 0000 00:23:09.263 Serial Number: .................... 00:23:09.263 Model Number: ........................................ 
00:23:09.263 Firmware Version: 25.01 00:23:09.263 Recommended Arb Burst: 0 00:23:09.263 IEEE OUI Identifier: 00 00 00 00:23:09.263 Multi-path I/O 00:23:09.263 May have multiple subsystem ports: No 00:23:09.263 May have multiple controllers: No 00:23:09.263 Associated with SR-IOV VF: No 00:23:09.263 Max Data Transfer Size: 131072 00:23:09.263 Max Number of Namespaces: 0 00:23:09.263 Max Number of I/O Queues: 1024 00:23:09.263 NVMe Specification Version (VS): 1.3 00:23:09.263 NVMe Specification Version (Identify): 1.3 00:23:09.263 Maximum Queue Entries: 128 00:23:09.263 Contiguous Queues Required: Yes 00:23:09.263 Arbitration Mechanisms Supported 00:23:09.263 Weighted Round Robin: Not Supported 00:23:09.263 Vendor Specific: Not Supported 00:23:09.263 Reset Timeout: 15000 ms 00:23:09.263 Doorbell Stride: 4 bytes 00:23:09.263 NVM Subsystem Reset: Not Supported 00:23:09.263 Command Sets Supported 00:23:09.263 NVM Command Set: Supported 00:23:09.263 Boot Partition: Not Supported 00:23:09.263 Memory Page Size Minimum: 4096 bytes 00:23:09.263 Memory Page Size Maximum: 4096 bytes 00:23:09.263 Persistent Memory Region: Not Supported 00:23:09.263 Optional Asynchronous Events Supported 00:23:09.263 Namespace Attribute Notices: Not Supported 00:23:09.263 Firmware Activation Notices: Not Supported 00:23:09.263 ANA Change Notices: Not Supported 00:23:09.263 PLE Aggregate Log Change Notices: Not Supported 00:23:09.263 LBA Status Info Alert Notices: Not Supported 00:23:09.263 EGE Aggregate Log Change Notices: Not Supported 00:23:09.263 Normal NVM Subsystem Shutdown event: Not Supported 00:23:09.263 Zone Descriptor Change Notices: Not Supported 00:23:09.263 Discovery Log Change Notices: Supported 00:23:09.263 Controller Attributes 00:23:09.263 128-bit Host Identifier: Not Supported 00:23:09.263 Non-Operational Permissive Mode: Not Supported 00:23:09.263 NVM Sets: Not Supported 00:23:09.263 Read Recovery Levels: Not Supported 00:23:09.263 Endurance Groups: Not Supported 00:23:09.263 Predictable Latency Mode: Not Supported 00:23:09.263 Traffic Based Keep Alive: Not Supported 00:23:09.263 Namespace Granularity: Not Supported 00:23:09.263 SQ Associations: Not Supported 00:23:09.263 UUID List: Not Supported 00:23:09.263 Multi-Domain Subsystem: Not Supported 00:23:09.263 Fixed Capacity Management: Not Supported 00:23:09.263 Variable Capacity Management: Not Supported 00:23:09.263 Delete Endurance Group: Not Supported 00:23:09.263 Delete NVM Set: Not Supported 00:23:09.263 Extended LBA Formats Supported: Not Supported 00:23:09.263 Flexible Data Placement Supported: Not Supported 00:23:09.263 00:23:09.263 Controller Memory Buffer Support 00:23:09.263 ================================ 00:23:09.263 Supported: No 00:23:09.263 00:23:09.263 Persistent Memory Region Support 00:23:09.263 ================================ 00:23:09.263 Supported: No 00:23:09.263 00:23:09.263 Admin Command Set Attributes 00:23:09.263 ============================ 00:23:09.263 Security Send/Receive: Not Supported 00:23:09.263 Format NVM: Not Supported 00:23:09.263 Firmware Activate/Download: Not Supported 00:23:09.263 Namespace Management: Not Supported 00:23:09.263 Device Self-Test: Not Supported 00:23:09.263 Directives: Not Supported 00:23:09.263 NVMe-MI: Not Supported 00:23:09.263 Virtualization Management: Not Supported 00:23:09.263 Doorbell Buffer Config: Not Supported 00:23:09.263 Get LBA Status Capability: Not Supported 00:23:09.263 Command & Feature Lockdown Capability: Not Supported 00:23:09.264 Abort Command Limit: 1 00:23:09.264 Async 
Event Request Limit: 4 00:23:09.264 Number of Firmware Slots: N/A 00:23:09.264 Firmware Slot 1 Read-Only: N/A 00:23:09.264 Firmware Activation Without Reset: N/A 00:23:09.264 Multiple Update Detection Support: N/A 00:23:09.264 Firmware Update Granularity: No Information Provided 00:23:09.264 Per-Namespace SMART Log: No 00:23:09.264 Asymmetric Namespace Access Log Page: Not Supported 00:23:09.264 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:09.264 Command Effects Log Page: Not Supported 00:23:09.264 Get Log Page Extended Data: Supported 00:23:09.264 Telemetry Log Pages: Not Supported 00:23:09.264 Persistent Event Log Pages: Not Supported 00:23:09.264 Supported Log Pages Log Page: May Support 00:23:09.264 Commands Supported & Effects Log Page: Not Supported 00:23:09.264 Feature Identifiers & Effects Log Page: May Support 00:23:09.264 NVMe-MI Commands & Effects Log Page: May Support 00:23:09.264 Data Area 4 for Telemetry Log: Not Supported 00:23:09.264 Error Log Page Entries Supported: 128 00:23:09.264 Keep Alive: Not Supported 00:23:09.264 00:23:09.264 NVM Command Set Attributes 00:23:09.264 ========================== 00:23:09.264 Submission Queue Entry Size 00:23:09.264 Max: 1 00:23:09.264 Min: 1 00:23:09.264 Completion Queue Entry Size 00:23:09.264 Max: 1 00:23:09.264 Min: 1 00:23:09.264 Number of Namespaces: 0 00:23:09.264 Compare Command: Not Supported 00:23:09.264 Write Uncorrectable Command: Not Supported 00:23:09.264 Dataset Management Command: Not Supported 00:23:09.264 Write Zeroes Command: Not Supported 00:23:09.264 Set Features Save Field: Not Supported 00:23:09.264 Reservations: Not Supported 00:23:09.264 Timestamp: Not Supported 00:23:09.264 Copy: Not Supported 00:23:09.264 Volatile Write Cache: Not Present 00:23:09.264 Atomic Write Unit (Normal): 1 00:23:09.264 Atomic Write Unit (PFail): 1 00:23:09.264 Atomic Compare & Write Unit: 1 00:23:09.264 Fused Compare & Write: Supported 00:23:09.264 Scatter-Gather List 00:23:09.264 SGL Command Set: Supported 00:23:09.264 SGL Keyed: Supported 00:23:09.264 SGL Bit Bucket Descriptor: Not Supported 00:23:09.264 SGL Metadata Pointer: Not Supported 00:23:09.264 Oversized SGL: Not Supported 00:23:09.264 SGL Metadata Address: Not Supported 00:23:09.264 SGL Offset: Supported 00:23:09.264 Transport SGL Data Block: Not Supported 00:23:09.264 Replay Protected Memory Block: Not Supported 00:23:09.264 00:23:09.264 Firmware Slot Information 00:23:09.264 ========================= 00:23:09.264 Active slot: 0 00:23:09.264 00:23:09.264 00:23:09.264 Error Log 00:23:09.264 ========= 00:23:09.264 00:23:09.264 Active Namespaces 00:23:09.264 ================= 00:23:09.264 Discovery Log Page 00:23:09.264 ================== 00:23:09.264 Generation Counter: 2 00:23:09.264 Number of Records: 2 00:23:09.264 Record Format: 0 00:23:09.264 00:23:09.264 Discovery Log Entry 0 00:23:09.264 ---------------------- 00:23:09.264 Transport Type: 3 (TCP) 00:23:09.264 Address Family: 1 (IPv4) 00:23:09.264 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:09.264 Entry Flags: 00:23:09.264 Duplicate Returned Information: 1 00:23:09.264 Explicit Persistent Connection Support for Discovery: 1 00:23:09.264 Transport Requirements: 00:23:09.264 Secure Channel: Not Required 00:23:09.264 Port ID: 0 (0x0000) 00:23:09.264 Controller ID: 65535 (0xffff) 00:23:09.264 Admin Max SQ Size: 128 00:23:09.264 Transport Service Identifier: 4420 00:23:09.264 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:09.264 Transport Address: 10.0.0.2 00:23:09.264 
Discovery Log Entry 1 00:23:09.264 ---------------------- 00:23:09.264 Transport Type: 3 (TCP) 00:23:09.264 Address Family: 1 (IPv4) 00:23:09.264 Subsystem Type: 2 (NVM Subsystem) 00:23:09.264 Entry Flags: 00:23:09.264 Duplicate Returned Information: 0 00:23:09.264 Explicit Persistent Connection Support for Discovery: 0 00:23:09.264 Transport Requirements: 00:23:09.264 Secure Channel: Not Required 00:23:09.264 Port ID: 0 (0x0000) 00:23:09.264 Controller ID: 65535 (0xffff) 00:23:09.264 Admin Max SQ Size: 128 00:23:09.264 Transport Service Identifier: 4420 00:23:09.264 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:09.264 Transport Address: 10.0.0.2 [2024-10-08 18:31:02.567494] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:09.264 [2024-10-08 18:31:02.567505] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30480) on tqpair=0xed0760 00:23:09.264 [2024-10-08 18:31:02.567512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.264 [2024-10-08 18:31:02.567517] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30600) on tqpair=0xed0760 00:23:09.264 [2024-10-08 18:31:02.567521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.264 [2024-10-08 18:31:02.567525] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30780) on tqpair=0xed0760 00:23:09.264 [2024-10-08 18:31:02.567529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.264 [2024-10-08 18:31:02.567534] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30900) on tqpair=0xed0760 00:23:09.264 [2024-10-08 18:31:02.567538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.264 [2024-10-08 18:31:02.567546] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.264 [2024-10-08 18:31:02.567549] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.264 [2024-10-08 18:31:02.567553] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.264 [2024-10-08 18:31:02.567560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.264 [2024-10-08 18:31:02.567575] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0 00:23:09.264 [2024-10-08 18:31:02.567763] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.264 [2024-10-08 18:31:02.567769] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.264 [2024-10-08 18:31:02.567772] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.264 [2024-10-08 18:31:02.567776] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30900) on tqpair=0xed0760 00:23:09.264 [2024-10-08 18:31:02.567782] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.264 [2024-10-08 18:31:02.567785] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.264 [2024-10-08 18:31:02.567788] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.264 [2024-10-08 18:31:02.567795] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.264 [2024-10-08 18:31:02.567811] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0 00:23:09.264 [2024-10-08 18:31:02.567881] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.264 [2024-10-08 18:31:02.567887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.264 [2024-10-08 18:31:02.567890] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.264 [2024-10-08 18:31:02.567893] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30900) on tqpair=0xed0760 00:23:09.264 [2024-10-08 18:31:02.567897] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:09.264 [2024-10-08 18:31:02.567902] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:09.264 [2024-10-08 18:31:02.567910] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.264 [2024-10-08 18:31:02.567914] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.264 [2024-10-08 18:31:02.567917] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.264 [2024-10-08 18:31:02.567922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.264 [2024-10-08 18:31:02.567932] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0 00:23:09.264 [2024-10-08 18:31:02.567996] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.264 [2024-10-08 18:31:02.568001] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.264 [2024-10-08 18:31:02.568004] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.264 [2024-10-08 18:31:02.568007] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30900) on tqpair=0xed0760 00:23:09.264 [2024-10-08 18:31:02.568016] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.264 [2024-10-08 18:31:02.568020] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.264 [2024-10-08 18:31:02.568023] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.264 [2024-10-08 18:31:02.568029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.264 [2024-10-08 18:31:02.568038] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0 00:23:09.264 [2024-10-08 18:31:02.568101] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.264 [2024-10-08 18:31:02.568106] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.264 [2024-10-08 18:31:02.568109] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.264 [2024-10-08 18:31:02.568113] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30900) on tqpair=0xed0760 00:23:09.265 [2024-10-08 18:31:02.568121] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568124] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568127] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.265 [2024-10-08 18:31:02.568133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.265 [2024-10-08 18:31:02.568141] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0 00:23:09.265 [2024-10-08 18:31:02.568220] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.265 [2024-10-08 18:31:02.568225] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.265 [2024-10-08 18:31:02.568228] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568232] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30900) on tqpair=0xed0760 00:23:09.265 [2024-10-08 18:31:02.568240] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568244] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568247] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.265 [2024-10-08 18:31:02.568254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.265 [2024-10-08 18:31:02.568265] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0 00:23:09.265 [2024-10-08 18:31:02.568320] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.265 [2024-10-08 18:31:02.568326] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.265 [2024-10-08 18:31:02.568329] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568332] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30900) on tqpair=0xed0760 00:23:09.265 [2024-10-08 18:31:02.568341] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568344] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568347] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.265 [2024-10-08 18:31:02.568353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.265 [2024-10-08 18:31:02.568362] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0 00:23:09.265 [2024-10-08 18:31:02.568426] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.265 [2024-10-08 18:31:02.568432] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.265 [2024-10-08 18:31:02.568435] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568439] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30900) on tqpair=0xed0760 00:23:09.265 [2024-10-08 18:31:02.568447] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568450] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568453] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.265 [2024-10-08 18:31:02.568459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.265 [2024-10-08 18:31:02.568469] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0 00:23:09.265 [2024-10-08 18:31:02.568530] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.265 [2024-10-08 18:31:02.568536] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.265 [2024-10-08 18:31:02.568539] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568542] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30900) on tqpair=0xed0760 00:23:09.265 [2024-10-08 18:31:02.568550] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568554] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568557] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.265 [2024-10-08 18:31:02.568563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.265 [2024-10-08 18:31:02.568572] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0 00:23:09.265 [2024-10-08 18:31:02.568631] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.265 [2024-10-08 18:31:02.568636] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.265 [2024-10-08 18:31:02.568639] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568643] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30900) on tqpair=0xed0760 00:23:09.265 [2024-10-08 18:31:02.568651] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568654] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568657] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.265 [2024-10-08 18:31:02.568663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.265 [2024-10-08 18:31:02.568674] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0 00:23:09.265 [2024-10-08 18:31:02.568732] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.265 [2024-10-08 18:31:02.568737] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.265 [2024-10-08 18:31:02.568740] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568743] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30900) on tqpair=0xed0760 00:23:09.265 [2024-10-08 18:31:02.568751] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568755] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568758] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.265 [2024-10-08 18:31:02.568764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.265 [2024-10-08 18:31:02.568773] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0 00:23:09.265 [2024-10-08 18:31:02.568833] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.265 [2024-10-08 18:31:02.568839] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.265 [2024-10-08 18:31:02.568842] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568845] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30900) on tqpair=0xed0760 00:23:09.265 [2024-10-08 18:31:02.568853] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568857] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.265 [2024-10-08 18:31:02.568860] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.265 [2024-10-08 18:31:02.568865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.265 [2024-10-08 18:31:02.568874] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0
[... ~22 identical response/poll cycles elided (18:31:02.568943 - 18:31:02.571174), each: nvme_tcp_pdu_ch_handle pdu type = 5 -> nvme_tcp_pdu_psh_handle -> nvme_tcp_capsule_resp_hdr_handle -> nvme_tcp_req_complete tcp_req(0xf30900) on tqpair=0xed0760 -> nvme_tcp_build_contig_request -> nvme_tcp_qpair_capsule_cmd_send capsule_cmd cid=3 -> *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 -> nvme_tcp_qpair_cmd_send_complete tcp req 0xf30900, cid 3, qid 0 ...]
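The cycle repeated above is the host side of controller shutdown: nvme_ctrlr_shutdown_poll_async keeps re-reading the controller's CSTS property until its shutdown-status field reports completion, and on a fabrics transport each read travels as a Fabric Property Get capsule on the admin queue, which is why every iteration prints one FABRIC PROPERTY GET qid:0 cid:3 send/response pair. A minimal sketch of the per-poll check, using the register definitions from spdk/nvme_spec.h (illustrative only, not the driver's actual loop):

#include <stdbool.h>
#include "spdk/nvme_spec.h"

/* One CSTS read per poll; over NVMe-oF the read itself is the
 * "FABRIC PROPERTY GET qid:0 cid:3" capsule traffic seen above. */
static bool
shutdown_is_complete(union spdk_nvme_csts_register csts)
{
    return csts.bits.shst == SPDK_NVME_SHST_COMPLETE;
}
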
00:23:09.267 [2024-10-08 18:31:02.571182] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.267 [2024-10-08 18:31:02.571185] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.267 [2024-10-08 18:31:02.571188] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.267 [2024-10-08 18:31:02.571194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.267 [2024-10-08 18:31:02.571203] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0 00:23:09.267 [2024-10-08 18:31:02.571261] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.267 [2024-10-08 18:31:02.571267] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.267 [2024-10-08 18:31:02.571270] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.267 [2024-10-08 18:31:02.571273] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30900) on tqpair=0xed0760 00:23:09.267 [2024-10-08 18:31:02.571282] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.267 [2024-10-08 18:31:02.571285] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.267 [2024-10-08 18:31:02.571290] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.267 [2024-10-08 18:31:02.571296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.267 [2024-10-08 18:31:02.571306] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0 00:23:09.267 [2024-10-08 18:31:02.571372] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.267 [2024-10-08 18:31:02.575385] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.267 [2024-10-08 18:31:02.575390] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.267 [2024-10-08 18:31:02.575394] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30900) on tqpair=0xed0760 00:23:09.267 [2024-10-08 18:31:02.575404] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.267 [2024-10-08 18:31:02.575408] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.267 [2024-10-08 18:31:02.575411] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed0760) 00:23:09.267 [2024-10-08 18:31:02.575417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.267 [2024-10-08 18:31:02.575428] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf30900, cid 3, qid 0 00:23:09.267 [2024-10-08 18:31:02.575574] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.267 [2024-10-08 18:31:02.575580] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.267 [2024-10-08 18:31:02.575583] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.267 [2024-10-08 18:31:02.575586] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf30900) on tqpair=0xed0760 00:23:09.267 [2024-10-08 18:31:02.575593] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:23:09.530 00:23:09.530 18:31:02 
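The test script then runs the spdk_nvme_identify example against the TCP target; the EAL banner and the admin-queue state machine that follow (connect adminq, icreq/icresp, FABRIC CONNECT, register reads, identify) are all produced by that single process. A sketch of the equivalent connection through SPDK's public API, assuming the env layer is already initialized (illustrative, not the example's actual source):

#include <stddef.h>
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *
connect_cnode1(void)
{
    struct spdk_nvme_transport_id trid = {0};

    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return NULL;
    }

    /* Drives everything logged below: socket connect, ICReq/ICResp
     * exchange, FABRIC CONNECT, vs/cap reads, CC.EN handshake, identify. */
    return spdk_nvme_connect(&trid, NULL, 0);
}
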
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:09.530 [2024-10-08 18:31:02.612335] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:23:09.530 [2024-10-08 18:31:02.612381] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid502604 ] 00:23:09.530 [2024-10-08 18:31:02.639510] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:09.530 [2024-10-08 18:31:02.639552] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:09.530 [2024-10-08 18:31:02.639557] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:09.530 [2024-10-08 18:31:02.639569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:09.530 [2024-10-08 18:31:02.639576] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:09.530 [2024-10-08 18:31:02.639989] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:09.530 [2024-10-08 18:31:02.640018] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb69760 0 00:23:09.530 [2024-10-08 18:31:02.646393] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:09.530 [2024-10-08 18:31:02.646406] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:09.530 [2024-10-08 18:31:02.646410] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:09.530 [2024-10-08 18:31:02.646417] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:09.530 [2024-10-08 18:31:02.646440] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.530 [2024-10-08 18:31:02.646445] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.530 [2024-10-08 18:31:02.646449] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69760) 00:23:09.530 [2024-10-08 18:31:02.646459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:09.530 [2024-10-08 18:31:02.646477] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9480, cid 0, qid 0 00:23:09.530 [2024-10-08 18:31:02.654385] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.530 [2024-10-08 18:31:02.654393] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.530 [2024-10-08 18:31:02.654397] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.530 [2024-10-08 18:31:02.654400] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9480) on tqpair=0xb69760 00:23:09.530 [2024-10-08 18:31:02.654409] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:09.530 [2024-10-08 18:31:02.654415] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:09.530 [2024-10-08 18:31:02.654420] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:09.530 [2024-10-08 18:31:02.654430] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.530 [2024-10-08 18:31:02.654434] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.530 [2024-10-08 18:31:02.654437] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69760) 00:23:09.530 [2024-10-08 18:31:02.654444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.530 [2024-10-08 18:31:02.654457] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9480, cid 0, qid 0 00:23:09.530 [2024-10-08 18:31:02.654663] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.530 [2024-10-08 18:31:02.654669] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.530 [2024-10-08 18:31:02.654672] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.530 [2024-10-08 18:31:02.654676] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9480) on tqpair=0xb69760 00:23:09.530 [2024-10-08 18:31:02.654680] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:09.530 [2024-10-08 18:31:02.654686] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:09.530 [2024-10-08 18:31:02.654693] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.530 [2024-10-08 18:31:02.654696] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.530 [2024-10-08 18:31:02.654699] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69760) 00:23:09.530 [2024-10-08 18:31:02.654705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.530 [2024-10-08 18:31:02.654715] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9480, cid 0, qid 0 00:23:09.530 [2024-10-08 18:31:02.654777] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.530 [2024-10-08 18:31:02.654782] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.530 [2024-10-08 18:31:02.654786] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.530 [2024-10-08 18:31:02.654789] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9480) on tqpair=0xb69760 00:23:09.530 [2024-10-08 18:31:02.654794] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:09.530 [2024-10-08 18:31:02.654801] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:09.530 [2024-10-08 18:31:02.654809] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.530 [2024-10-08 18:31:02.654812] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.530 [2024-10-08 18:31:02.654815] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69760) 00:23:09.530 [2024-10-08 18:31:02.654821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.530 [2024-10-08 18:31:02.654831] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9480, cid 0, qid 0 00:23:09.530 [2024-10-08 18:31:02.654893] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.530 [2024-10-08 18:31:02.654898] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.530 [2024-10-08 18:31:02.654902] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.530 [2024-10-08 18:31:02.654905] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9480) on tqpair=0xb69760 00:23:09.530 [2024-10-08 18:31:02.654909] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:09.530 [2024-10-08 18:31:02.654918] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.530 [2024-10-08 18:31:02.654921] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.530 [2024-10-08 18:31:02.654925] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69760) 00:23:09.531 [2024-10-08 18:31:02.654930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.531 [2024-10-08 18:31:02.654939] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9480, cid 0, qid 0 00:23:09.531 [2024-10-08 18:31:02.654999] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.531 [2024-10-08 18:31:02.655004] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.531 [2024-10-08 18:31:02.655008] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.655011] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9480) on tqpair=0xb69760 00:23:09.531 [2024-10-08 18:31:02.655015] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:09.531 [2024-10-08 18:31:02.655019] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:09.531 [2024-10-08 18:31:02.655026] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:09.531 [2024-10-08 18:31:02.655131] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:09.531 [2024-10-08 18:31:02.655135] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:09.531 [2024-10-08 18:31:02.655141] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.655145] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.655148] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69760) 00:23:09.531 [2024-10-08 18:31:02.655153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.531 [2024-10-08 18:31:02.655163] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9480, cid 0, qid 0 00:23:09.531 [2024-10-08 18:31:02.655223] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.531 [2024-10-08 18:31:02.655229] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.531 [2024-10-08 18:31:02.655232] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.655235] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9480) on tqpair=0xb69760 00:23:09.531 [2024-10-08 18:31:02.655239] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:09.531 [2024-10-08 18:31:02.655250] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.655253] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.655257] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69760) 00:23:09.531 [2024-10-08 18:31:02.655262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.531 [2024-10-08 18:31:02.655273] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9480, cid 0, qid 0 00:23:09.531 [2024-10-08 18:31:02.655341] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.531 [2024-10-08 18:31:02.655347] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.531 [2024-10-08 18:31:02.655350] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.655353] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9480) on tqpair=0xb69760 00:23:09.531 [2024-10-08 18:31:02.655357] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:09.531 [2024-10-08 18:31:02.655362] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:09.531 [2024-10-08 18:31:02.655368] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:09.531 [2024-10-08 18:31:02.655387] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:09.531 [2024-10-08 18:31:02.655395] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.655398] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69760) 00:23:09.531 [2024-10-08 18:31:02.655404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.531 [2024-10-08 18:31:02.655414] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9480, cid 0, qid 0 00:23:09.531 [2024-10-08 18:31:02.655510] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:09.531 [2024-10-08 18:31:02.655516] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:09.531 [2024-10-08 18:31:02.655519] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.655522] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69760): datao=0, datal=4096, cccid=0 00:23:09.531 [2024-10-08 18:31:02.655527] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbc9480) on tqpair(0xb69760): expected_datao=0, 
payload_size=4096 00:23:09.531 [2024-10-08 18:31:02.655530] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.655543] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.655547] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697385] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.531 [2024-10-08 18:31:02.697395] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.531 [2024-10-08 18:31:02.697398] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697401] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9480) on tqpair=0xb69760 00:23:09.531 [2024-10-08 18:31:02.697408] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:09.531 [2024-10-08 18:31:02.697412] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:09.531 [2024-10-08 18:31:02.697416] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:09.531 [2024-10-08 18:31:02.697420] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:09.531 [2024-10-08 18:31:02.697424] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:09.531 [2024-10-08 18:31:02.697431] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:09.531 [2024-10-08 18:31:02.697440] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:09.531 [2024-10-08 18:31:02.697448] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697452] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697456] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69760) 00:23:09.531 [2024-10-08 18:31:02.697462] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:09.531 [2024-10-08 18:31:02.697475] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9480, cid 0, qid 0 00:23:09.531 [2024-10-08 18:31:02.697539] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.531 [2024-10-08 18:31:02.697545] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.531 [2024-10-08 18:31:02.697549] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697552] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9480) on tqpair=0xb69760 00:23:09.531 [2024-10-08 18:31:02.697557] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697561] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697564] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69760) 00:23:09.531 [2024-10-08 18:31:02.697569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.531 
[2024-10-08 18:31:02.697575] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697578] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697581] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb69760) 00:23:09.531 [2024-10-08 18:31:02.697586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.531 [2024-10-08 18:31:02.697591] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697595] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697598] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb69760) 00:23:09.531 [2024-10-08 18:31:02.697603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.531 [2024-10-08 18:31:02.697608] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697611] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697614] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.531 [2024-10-08 18:31:02.697619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.531 [2024-10-08 18:31:02.697624] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:09.531 [2024-10-08 18:31:02.697634] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:09.531 [2024-10-08 18:31:02.697640] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697643] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb69760) 00:23:09.531 [2024-10-08 18:31:02.697648] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.531 [2024-10-08 18:31:02.697660] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9480, cid 0, qid 0 00:23:09.531 [2024-10-08 18:31:02.697666] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9600, cid 1, qid 0 00:23:09.531 [2024-10-08 18:31:02.697670] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9780, cid 2, qid 0 00:23:09.531 [2024-10-08 18:31:02.697675] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.531 [2024-10-08 18:31:02.697679] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9a80, cid 4, qid 0 00:23:09.531 [2024-10-08 18:31:02.697777] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.531 [2024-10-08 18:31:02.697782] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.531 [2024-10-08 18:31:02.697785] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697789] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9a80) on tqpair=0xb69760 00:23:09.531 [2024-10-08 18:31:02.697793] 
nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:09.531 [2024-10-08 18:31:02.697797] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:09.531 [2024-10-08 18:31:02.697806] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:09.531 [2024-10-08 18:31:02.697812] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:09.531 [2024-10-08 18:31:02.697818] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697821] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.531 [2024-10-08 18:31:02.697824] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb69760) 00:23:09.532 [2024-10-08 18:31:02.697830] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:09.532 [2024-10-08 18:31:02.697840] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9a80, cid 4, qid 0 00:23:09.532 [2024-10-08 18:31:02.697899] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.532 [2024-10-08 18:31:02.697905] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.532 [2024-10-08 18:31:02.697908] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.697911] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9a80) on tqpair=0xb69760 00:23:09.532 [2024-10-08 18:31:02.697961] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:09.532 [2024-10-08 18:31:02.697971] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:09.532 [2024-10-08 18:31:02.697978] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.697981] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb69760) 00:23:09.532 [2024-10-08 18:31:02.697987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.532 [2024-10-08 18:31:02.697997] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9a80, cid 4, qid 0 00:23:09.532 [2024-10-08 18:31:02.698089] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:09.532 [2024-10-08 18:31:02.698095] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:09.532 [2024-10-08 18:31:02.698098] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698101] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69760): datao=0, datal=4096, cccid=4 00:23:09.532 [2024-10-08 18:31:02.698105] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbc9a80) on tqpair(0xb69760): expected_datao=0, payload_size=4096 00:23:09.532 [2024-10-08 18:31:02.698112] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698119] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698122] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698130] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.532 [2024-10-08 18:31:02.698135] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.532 [2024-10-08 18:31:02.698138] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698142] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9a80) on tqpair=0xb69760 00:23:09.532 [2024-10-08 18:31:02.698152] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:09.532 [2024-10-08 18:31:02.698166] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:09.532 [2024-10-08 18:31:02.698176] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:09.532 [2024-10-08 18:31:02.698182] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698186] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb69760) 00:23:09.532 [2024-10-08 18:31:02.698191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.532 [2024-10-08 18:31:02.698202] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9a80, cid 4, qid 0 00:23:09.532 [2024-10-08 18:31:02.698287] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:09.532 [2024-10-08 18:31:02.698293] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:09.532 [2024-10-08 18:31:02.698296] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698300] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69760): datao=0, datal=4096, cccid=4 00:23:09.532 [2024-10-08 18:31:02.698304] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbc9a80) on tqpair(0xb69760): expected_datao=0, payload_size=4096 00:23:09.532 [2024-10-08 18:31:02.698307] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698313] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698317] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698326] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.532 [2024-10-08 18:31:02.698331] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.532 [2024-10-08 18:31:02.698334] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698337] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9a80) on tqpair=0xb69760 00:23:09.532 [2024-10-08 18:31:02.698346] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:09.532 [2024-10-08 18:31:02.698355] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:09.532 [2024-10-08 18:31:02.698361] 
nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698365] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb69760) 00:23:09.532 [2024-10-08 18:31:02.698370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.532 [2024-10-08 18:31:02.698386] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9a80, cid 4, qid 0 00:23:09.532 [2024-10-08 18:31:02.698463] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:09.532 [2024-10-08 18:31:02.698469] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:09.532 [2024-10-08 18:31:02.698472] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698477] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69760): datao=0, datal=4096, cccid=4 00:23:09.532 [2024-10-08 18:31:02.698481] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbc9a80) on tqpair(0xb69760): expected_datao=0, payload_size=4096 00:23:09.532 [2024-10-08 18:31:02.698485] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698491] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698495] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698505] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.532 [2024-10-08 18:31:02.698510] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.532 [2024-10-08 18:31:02.698513] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698517] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9a80) on tqpair=0xb69760 00:23:09.532 [2024-10-08 18:31:02.698526] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:09.532 [2024-10-08 18:31:02.698534] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:09.532 [2024-10-08 18:31:02.698541] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:09.532 [2024-10-08 18:31:02.698547] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:09.532 [2024-10-08 18:31:02.698551] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:09.532 [2024-10-08 18:31:02.698556] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:09.532 [2024-10-08 18:31:02.698561] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:09.532 [2024-10-08 18:31:02.698565] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:09.532 [2024-10-08 18:31:02.698569] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 
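With vs/cap read, the CC.EN/CSTS.RDY handshake done, identify controller cached, AERs armed, keep-alive set to 5000000 us, queue counts negotiated, and the per-namespace identifies finished, the state machine reaches "ready" here; Namespace 1, reported as added above, is now reachable through the public accessors. A minimal sketch (names from spdk/nvme.h, illustrative only):

#include <stdio.h>
#include <stdint.h>
#include "spdk/nvme.h"

static void
list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
    uint32_t nsid;

    for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        /* Sizes come from the per-namespace identify completed above. */
        printf("nsid %u: %ju sectors of %u bytes\n", nsid,
               (uintmax_t)spdk_nvme_ns_get_num_sectors(ns),
               spdk_nvme_ns_get_sector_size(ns));
    }
}
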
00:23:09.532 [2024-10-08 18:31:02.698581] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698585] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb69760) 00:23:09.532 [2024-10-08 18:31:02.698591] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.532 [2024-10-08 18:31:02.698597] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698600] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698604] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb69760) 00:23:09.532 [2024-10-08 18:31:02.698609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.532 [2024-10-08 18:31:02.698621] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9a80, cid 4, qid 0 00:23:09.532 [2024-10-08 18:31:02.698625] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9c00, cid 5, qid 0 00:23:09.532 [2024-10-08 18:31:02.698706] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.532 [2024-10-08 18:31:02.698712] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.532 [2024-10-08 18:31:02.698715] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698718] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9a80) on tqpair=0xb69760 00:23:09.532 [2024-10-08 18:31:02.698724] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.532 [2024-10-08 18:31:02.698729] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.532 [2024-10-08 18:31:02.698734] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698737] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9c00) on tqpair=0xb69760 00:23:09.532 [2024-10-08 18:31:02.698745] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698749] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb69760) 00:23:09.532 [2024-10-08 18:31:02.698754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.532 [2024-10-08 18:31:02.698764] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9c00, cid 5, qid 0 00:23:09.532 [2024-10-08 18:31:02.698826] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.532 [2024-10-08 18:31:02.698831] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.532 [2024-10-08 18:31:02.698834] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698838] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9c00) on tqpair=0xb69760 00:23:09.532 [2024-10-08 18:31:02.698846] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698850] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb69760) 00:23:09.532 [2024-10-08 18:31:02.698855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.532 [2024-10-08 18:31:02.698864] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9c00, cid 5, qid 0 00:23:09.532 [2024-10-08 18:31:02.698924] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.532 [2024-10-08 18:31:02.698930] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.532 [2024-10-08 18:31:02.698933] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698937] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9c00) on tqpair=0xb69760 00:23:09.532 [2024-10-08 18:31:02.698945] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.532 [2024-10-08 18:31:02.698948] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb69760) 00:23:09.532 [2024-10-08 18:31:02.698954] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.533 [2024-10-08 18:31:02.698963] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9c00, cid 5, qid 0 00:23:09.533 [2024-10-08 18:31:02.699020] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.533 [2024-10-08 18:31:02.699025] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.533 [2024-10-08 18:31:02.699029] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699032] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9c00) on tqpair=0xb69760 00:23:09.533 [2024-10-08 18:31:02.699045] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699049] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb69760) 00:23:09.533 [2024-10-08 18:31:02.699054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.533 [2024-10-08 18:31:02.699061] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699064] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb69760) 00:23:09.533 [2024-10-08 18:31:02.699069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.533 [2024-10-08 18:31:02.699075] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699079] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb69760) 00:23:09.533 [2024-10-08 18:31:02.699088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.533 [2024-10-08 18:31:02.699094] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699098] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb69760) 00:23:09.533 [2024-10-08 18:31:02.699103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.533 [2024-10-08 18:31:02.699114] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9c00, cid 5, qid 0 00:23:09.533 [2024-10-08 18:31:02.699119] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9a80, cid 4, qid 0 00:23:09.533 [2024-10-08 18:31:02.699123] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9d80, cid 6, qid 0 00:23:09.533 [2024-10-08 18:31:02.699127] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9f00, cid 7, qid 0 00:23:09.533 [2024-10-08 18:31:02.699263] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:09.533 [2024-10-08 18:31:02.699269] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:09.533 [2024-10-08 18:31:02.699272] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699275] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69760): datao=0, datal=8192, cccid=5 00:23:09.533 [2024-10-08 18:31:02.699279] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbc9c00) on tqpair(0xb69760): expected_datao=0, payload_size=8192 00:23:09.533 [2024-10-08 18:31:02.699283] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699297] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699301] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699308] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:09.533 [2024-10-08 18:31:02.699313] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:09.533 [2024-10-08 18:31:02.699317] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699320] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69760): datao=0, datal=512, cccid=4 00:23:09.533 [2024-10-08 18:31:02.699324] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbc9a80) on tqpair(0xb69760): expected_datao=0, payload_size=512 00:23:09.533 [2024-10-08 18:31:02.699328] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699333] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699336] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699341] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:09.533 [2024-10-08 18:31:02.699346] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:09.533 [2024-10-08 18:31:02.699349] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699352] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69760): datao=0, datal=512, cccid=6 00:23:09.533 [2024-10-08 18:31:02.699356] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbc9d80) on tqpair(0xb69760): expected_datao=0, payload_size=512 00:23:09.533 [2024-10-08 18:31:02.699360] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699366] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699369] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699374] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:09.533 [2024-10-08 18:31:02.699385] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =7 00:23:09.533 [2024-10-08 18:31:02.699388] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699392] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69760): datao=0, datal=4096, cccid=7 00:23:09.533 [2024-10-08 18:31:02.699397] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbc9f00) on tqpair(0xb69760): expected_datao=0, payload_size=4096 00:23:09.533 [2024-10-08 18:31:02.699401] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699407] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699410] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699418] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.533 [2024-10-08 18:31:02.699423] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.533 [2024-10-08 18:31:02.699426] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699429] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9c00) on tqpair=0xb69760 00:23:09.533 [2024-10-08 18:31:02.699440] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.533 [2024-10-08 18:31:02.699445] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.533 [2024-10-08 18:31:02.699448] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699452] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9a80) on tqpair=0xb69760 00:23:09.533 [2024-10-08 18:31:02.699461] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.533 [2024-10-08 18:31:02.699466] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.533 [2024-10-08 18:31:02.699469] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699472] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9d80) on tqpair=0xb69760 00:23:09.533 [2024-10-08 18:31:02.699478] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.533 [2024-10-08 18:31:02.699483] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.533 [2024-10-08 18:31:02.699487] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.533 [2024-10-08 18:31:02.699490] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9f00) on tqpair=0xb69760 00:23:09.533 ===================================================== 00:23:09.533 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:09.533 ===================================================== 00:23:09.533 Controller Capabilities/Features 00:23:09.533 ================================ 00:23:09.533 Vendor ID: 8086 00:23:09.533 Subsystem Vendor ID: 8086 00:23:09.533 Serial Number: SPDK00000000000001 00:23:09.533 Model Number: SPDK bdev Controller 00:23:09.533 Firmware Version: 25.01 00:23:09.533 Recommended Arb Burst: 6 00:23:09.533 IEEE OUI Identifier: e4 d2 5c 00:23:09.533 Multi-path I/O 00:23:09.533 May have multiple subsystem ports: Yes 00:23:09.533 May have multiple controllers: Yes 00:23:09.533 Associated with SR-IOV VF: No 00:23:09.533 Max Data Transfer Size: 131072 00:23:09.533 Max Number of Namespaces: 32 00:23:09.533 Max Number of I/O Queues: 127 00:23:09.533 NVMe Specification Version (VS): 1.3 
00:23:09.533 NVMe Specification Version (Identify): 1.3 00:23:09.533 Maximum Queue Entries: 128 00:23:09.533 Contiguous Queues Required: Yes 00:23:09.533 Arbitration Mechanisms Supported 00:23:09.533 Weighted Round Robin: Not Supported 00:23:09.533 Vendor Specific: Not Supported 00:23:09.533 Reset Timeout: 15000 ms 00:23:09.533 Doorbell Stride: 4 bytes 00:23:09.533 NVM Subsystem Reset: Not Supported 00:23:09.533 Command Sets Supported 00:23:09.533 NVM Command Set: Supported 00:23:09.533 Boot Partition: Not Supported 00:23:09.533 Memory Page Size Minimum: 4096 bytes 00:23:09.533 Memory Page Size Maximum: 4096 bytes 00:23:09.533 Persistent Memory Region: Not Supported 00:23:09.533 Optional Asynchronous Events Supported 00:23:09.533 Namespace Attribute Notices: Supported 00:23:09.533 Firmware Activation Notices: Not Supported 00:23:09.533 ANA Change Notices: Not Supported 00:23:09.533 PLE Aggregate Log Change Notices: Not Supported 00:23:09.533 LBA Status Info Alert Notices: Not Supported 00:23:09.533 EGE Aggregate Log Change Notices: Not Supported 00:23:09.533 Normal NVM Subsystem Shutdown event: Not Supported 00:23:09.533 Zone Descriptor Change Notices: Not Supported 00:23:09.533 Discovery Log Change Notices: Not Supported 00:23:09.533 Controller Attributes 00:23:09.533 128-bit Host Identifier: Supported 00:23:09.533 Non-Operational Permissive Mode: Not Supported 00:23:09.533 NVM Sets: Not Supported 00:23:09.533 Read Recovery Levels: Not Supported 00:23:09.533 Endurance Groups: Not Supported 00:23:09.533 Predictable Latency Mode: Not Supported 00:23:09.533 Traffic Based Keep Alive: Not Supported 00:23:09.533 Namespace Granularity: Not Supported 00:23:09.533 SQ Associations: Not Supported 00:23:09.533 UUID List: Not Supported 00:23:09.533 Multi-Domain Subsystem: Not Supported 00:23:09.533 Fixed Capacity Management: Not Supported 00:23:09.533 Variable Capacity Management: Not Supported 00:23:09.533 Delete Endurance Group: Not Supported 00:23:09.533 Delete NVM Set: Not Supported 00:23:09.533 Extended LBA Formats Supported: Not Supported 00:23:09.533 Flexible Data Placement Supported: Not Supported 00:23:09.533 00:23:09.533 Controller Memory Buffer Support 00:23:09.533 ================================ 00:23:09.533 Supported: No 00:23:09.533 00:23:09.533 Persistent Memory Region Support 00:23:09.533 ================================ 00:23:09.533 Supported: No 00:23:09.533 00:23:09.533 Admin Command Set Attributes 00:23:09.533 ============================ 00:23:09.533 Security Send/Receive: Not Supported 00:23:09.533 Format NVM: Not Supported 00:23:09.534 Firmware Activate/Download: Not Supported 00:23:09.534 Namespace Management: Not Supported 00:23:09.534 Device Self-Test: Not Supported 00:23:09.534 Directives: Not Supported 00:23:09.534 NVMe-MI: Not Supported 00:23:09.534 Virtualization Management: Not Supported 00:23:09.534 Doorbell Buffer Config: Not Supported 00:23:09.534 Get LBA Status Capability: Not Supported 00:23:09.534 Command & Feature Lockdown Capability: Not Supported 00:23:09.534 Abort Command Limit: 4 00:23:09.534 Async Event Request Limit: 4 00:23:09.534 Number of Firmware Slots: N/A 00:23:09.534 Firmware Slot 1 Read-Only: N/A 00:23:09.534 Firmware Activation Without Reset: N/A 00:23:09.534 Multiple Update Detection Support: N/A 00:23:09.534 Firmware Update Granularity: No Information Provided 00:23:09.534 Per-Namespace SMART Log: No 00:23:09.534 Asymmetric Namespace Access Log Page: Not Supported 00:23:09.534 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:09.534 Command Effects
Log Page: Supported 00:23:09.534 Get Log Page Extended Data: Supported 00:23:09.534 Telemetry Log Pages: Not Supported 00:23:09.534 Persistent Event Log Pages: Not Supported 00:23:09.534 Supported Log Pages Log Page: May Support 00:23:09.534 Commands Supported & Effects Log Page: Not Supported 00:23:09.534 Feature Identifiers & Effects Log Page: May Support 00:23:09.534 NVMe-MI Commands & Effects Log Page: May Support 00:23:09.534 Data Area 4 for Telemetry Log: Not Supported 00:23:09.534 Error Log Page Entries Supported: 128 00:23:09.534 Keep Alive: Supported 00:23:09.534 Keep Alive Granularity: 10000 ms 00:23:09.534 00:23:09.534 NVM Command Set Attributes 00:23:09.534 ========================== 00:23:09.534 Submission Queue Entry Size 00:23:09.534 Max: 64 00:23:09.534 Min: 64 00:23:09.534 Completion Queue Entry Size 00:23:09.534 Max: 16 00:23:09.534 Min: 16 00:23:09.534 Number of Namespaces: 32 00:23:09.534 Compare Command: Supported 00:23:09.534 Write Uncorrectable Command: Not Supported 00:23:09.534 Dataset Management Command: Supported 00:23:09.534 Write Zeroes Command: Supported 00:23:09.534 Set Features Save Field: Not Supported 00:23:09.534 Reservations: Supported 00:23:09.534 Timestamp: Not Supported 00:23:09.534 Copy: Supported 00:23:09.534 Volatile Write Cache: Present 00:23:09.534 Atomic Write Unit (Normal): 1 00:23:09.534 Atomic Write Unit (PFail): 1 00:23:09.534 Atomic Compare & Write Unit: 1 00:23:09.534 Fused Compare & Write: Supported 00:23:09.534 Scatter-Gather List 00:23:09.534 SGL Command Set: Supported 00:23:09.534 SGL Keyed: Supported 00:23:09.534 SGL Bit Bucket Descriptor: Not Supported 00:23:09.534 SGL Metadata Pointer: Not Supported 00:23:09.534 Oversized SGL: Not Supported 00:23:09.534 SGL Metadata Address: Not Supported 00:23:09.534 SGL Offset: Supported 00:23:09.534 Transport SGL Data Block: Not Supported 00:23:09.534 Replay Protected Memory Block: Not Supported 00:23:09.534 00:23:09.534 Firmware Slot Information 00:23:09.534 ========================= 00:23:09.534 Active slot: 1 00:23:09.534 Slot 1 Firmware Revision: 25.01 00:23:09.534 00:23:09.534 00:23:09.534 Commands Supported and Effects 00:23:09.534 ============================== 00:23:09.534 Admin Commands 00:23:09.534 -------------- 00:23:09.534 Get Log Page (02h): Supported 00:23:09.534 Identify (06h): Supported 00:23:09.534 Abort (08h): Supported 00:23:09.534 Set Features (09h): Supported 00:23:09.534 Get Features (0Ah): Supported 00:23:09.534 Asynchronous Event Request (0Ch): Supported 00:23:09.534 Keep Alive (18h): Supported 00:23:09.534 I/O Commands 00:23:09.534 ------------ 00:23:09.534 Flush (00h): Supported LBA-Change 00:23:09.534 Write (01h): Supported LBA-Change 00:23:09.534 Read (02h): Supported 00:23:09.534 Compare (05h): Supported 00:23:09.534 Write Zeroes (08h): Supported LBA-Change 00:23:09.534 Dataset Management (09h): Supported LBA-Change 00:23:09.534 Copy (19h): Supported LBA-Change 00:23:09.534 00:23:09.534 Error Log 00:23:09.534 ========= 00:23:09.534 00:23:09.534 Arbitration 00:23:09.534 =========== 00:23:09.534 Arbitration Burst: 1 00:23:09.534 00:23:09.534 Power Management 00:23:09.534 ================ 00:23:09.534 Number of Power States: 1 00:23:09.534 Current Power State: Power State #0 00:23:09.534 Power State #0: 00:23:09.534 Max Power: 0.00 W 00:23:09.534 Non-Operational State: Operational 00:23:09.534 Entry Latency: Not Reported 00:23:09.534 Exit Latency: Not Reported 00:23:09.534 Relative Read Throughput: 0 00:23:09.534 Relative Read Latency: 0 00:23:09.534 Relative Write
Throughput: 0 00:23:09.534 Relative Write Latency: 0 00:23:09.534 Idle Power: Not Reported 00:23:09.534 Active Power: Not Reported 00:23:09.534 Non-Operational Permissive Mode: Not Supported 00:23:09.534 00:23:09.534 Health Information 00:23:09.534 ================== 00:23:09.534 Critical Warnings: 00:23:09.534 Available Spare Space: OK 00:23:09.534 Temperature: OK 00:23:09.534 Device Reliability: OK 00:23:09.534 Read Only: No 00:23:09.534 Volatile Memory Backup: OK 00:23:09.534 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:09.534 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:09.534 Available Spare: 0% 00:23:09.534 Available Spare Threshold: 0% 00:23:09.534 Life Percentage Used:[2024-10-08 18:31:02.699574] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.534 [2024-10-08 18:31:02.699578] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb69760) 00:23:09.534 [2024-10-08 18:31:02.699584] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.534 [2024-10-08 18:31:02.699596] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9f00, cid 7, qid 0 00:23:09.534 [2024-10-08 18:31:02.699681] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.534 [2024-10-08 18:31:02.699687] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.534 [2024-10-08 18:31:02.699690] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.534 [2024-10-08 18:31:02.699693] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9f00) on tqpair=0xb69760 00:23:09.534 [2024-10-08 18:31:02.699723] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:09.534 [2024-10-08 18:31:02.699733] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9480) on tqpair=0xb69760 00:23:09.534 [2024-10-08 18:31:02.699738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.534 [2024-10-08 18:31:02.699743] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9600) on tqpair=0xb69760 00:23:09.534 [2024-10-08 18:31:02.699747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.534 [2024-10-08 18:31:02.699752] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9780) on tqpair=0xb69760 00:23:09.534 [2024-10-08 18:31:02.699756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.534 [2024-10-08 18:31:02.699760] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.534 [2024-10-08 18:31:02.699766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.534 [2024-10-08 18:31:02.699773] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.534 [2024-10-08 18:31:02.699777] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.534 [2024-10-08 18:31:02.699780] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.534 [2024-10-08 18:31:02.699786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.534 [2024-10-08 18:31:02.699797] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.534 [2024-10-08 18:31:02.699854] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.534 [2024-10-08 18:31:02.699860] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.534 [2024-10-08 18:31:02.699864] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.534 [2024-10-08 18:31:02.699867] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.535 [2024-10-08 18:31:02.699872] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.699876] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.699879] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.535 [2024-10-08 18:31:02.699885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.535 [2024-10-08 18:31:02.699897] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.535 [2024-10-08 18:31:02.699972] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.535 [2024-10-08 18:31:02.699977] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.535 [2024-10-08 18:31:02.699981] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.699984] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.535 [2024-10-08 18:31:02.699988] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:09.535 [2024-10-08 18:31:02.699992] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:09.535 [2024-10-08 18:31:02.700000] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700003] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700007] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.535 [2024-10-08 18:31:02.700013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.535 [2024-10-08 18:31:02.700021] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.535 [2024-10-08 18:31:02.700082] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.535 [2024-10-08 18:31:02.700088] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.535 [2024-10-08 18:31:02.700091] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700094] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.535 [2024-10-08 18:31:02.700104] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700107] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700111] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.535 [2024-10-08 
18:31:02.700116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.535 [2024-10-08 18:31:02.700126] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.535 [2024-10-08 18:31:02.700186] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.535 [2024-10-08 18:31:02.700194] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.535 [2024-10-08 18:31:02.700197] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700200] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.535 [2024-10-08 18:31:02.700208] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700212] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700215] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.535 [2024-10-08 18:31:02.700221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.535 [2024-10-08 18:31:02.700230] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.535 [2024-10-08 18:31:02.700304] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.535 [2024-10-08 18:31:02.700310] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.535 [2024-10-08 18:31:02.700313] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700316] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.535 [2024-10-08 18:31:02.700324] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700328] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700331] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.535 [2024-10-08 18:31:02.700337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.535 [2024-10-08 18:31:02.700346] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.535 [2024-10-08 18:31:02.700421] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.535 [2024-10-08 18:31:02.700427] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.535 [2024-10-08 18:31:02.700431] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700434] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.535 [2024-10-08 18:31:02.700442] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700446] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700449] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.535 [2024-10-08 18:31:02.700455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.535 [2024-10-08 18:31:02.700464] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.535 [2024-10-08 18:31:02.700529] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.535 [2024-10-08 18:31:02.700535] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.535 [2024-10-08 18:31:02.700538] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700541] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.535 [2024-10-08 18:31:02.700550] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700554] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700557] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.535 [2024-10-08 18:31:02.700562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.535 [2024-10-08 18:31:02.700572] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.535 [2024-10-08 18:31:02.700632] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.535 [2024-10-08 18:31:02.700637] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.535 [2024-10-08 18:31:02.700642] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700646] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.535 [2024-10-08 18:31:02.700654] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700657] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700661] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.535 [2024-10-08 18:31:02.700666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.535 [2024-10-08 18:31:02.700676] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.535 [2024-10-08 18:31:02.700731] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.535 [2024-10-08 18:31:02.700737] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.535 [2024-10-08 18:31:02.700740] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700744] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.535 [2024-10-08 18:31:02.700752] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700755] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700758] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.535 [2024-10-08 18:31:02.700764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.535 [2024-10-08 18:31:02.700773] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.535 [2024-10-08 18:31:02.700831] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.535 [2024-10-08 
18:31:02.700836] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.535 [2024-10-08 18:31:02.700840] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700843] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.535 [2024-10-08 18:31:02.700851] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700855] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700858] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.535 [2024-10-08 18:31:02.700864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.535 [2024-10-08 18:31:02.700872] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.535 [2024-10-08 18:31:02.700934] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.535 [2024-10-08 18:31:02.700939] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.535 [2024-10-08 18:31:02.700943] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700946] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.535 [2024-10-08 18:31:02.700954] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700958] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.700961] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.535 [2024-10-08 18:31:02.700966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.535 [2024-10-08 18:31:02.700976] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.535 [2024-10-08 18:31:02.701051] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.535 [2024-10-08 18:31:02.701057] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.535 [2024-10-08 18:31:02.701060] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.701063] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.535 [2024-10-08 18:31:02.701073] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.701076] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.701080] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.535 [2024-10-08 18:31:02.701085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.535 [2024-10-08 18:31:02.701094] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.535 [2024-10-08 18:31:02.701169] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.535 [2024-10-08 18:31:02.701175] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.535 [2024-10-08 18:31:02.701178] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.535 [2024-10-08 
18:31:02.701181] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.535 [2024-10-08 18:31:02.701189] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.701193] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.535 [2024-10-08 18:31:02.701196] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.536 [2024-10-08 18:31:02.701201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.536 [2024-10-08 18:31:02.701210] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.536 [2024-10-08 18:31:02.701283] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.536 [2024-10-08 18:31:02.701289] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.536 [2024-10-08 18:31:02.701292] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.536 [2024-10-08 18:31:02.701296] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.536 [2024-10-08 18:31:02.701304] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.536 [2024-10-08 18:31:02.701307] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.536 [2024-10-08 18:31:02.701310] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.536 [2024-10-08 18:31:02.701316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.536 [2024-10-08 18:31:02.701325] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.536 [2024-10-08 18:31:02.705385] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.536 [2024-10-08 18:31:02.705393] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.536 [2024-10-08 18:31:02.705397] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.536 [2024-10-08 18:31:02.705400] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.536 [2024-10-08 18:31:02.705409] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:09.536 [2024-10-08 18:31:02.705413] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:09.536 [2024-10-08 18:31:02.705416] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69760) 00:23:09.536 [2024-10-08 18:31:02.705422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.536 [2024-10-08 18:31:02.705433] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc9900, cid 3, qid 0 00:23:09.536 [2024-10-08 18:31:02.705583] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:09.536 [2024-10-08 18:31:02.705589] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:09.536 [2024-10-08 18:31:02.705592] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:09.536 [2024-10-08 18:31:02.705596] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbc9900) on tqpair=0xb69760 00:23:09.536 [2024-10-08 18:31:02.705602] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:23:09.536 0% 00:23:09.536 Data Units Read: 0 00:23:09.536 Data Units Written: 0 00:23:09.536 Host Read Commands: 0 00:23:09.536 Host Write Commands: 0 00:23:09.536 Controller Busy Time: 0 minutes 00:23:09.536 Power Cycles: 0 00:23:09.536 Power On Hours: 0 hours 00:23:09.536 Unsafe Shutdowns: 0 00:23:09.536 Unrecoverable Media Errors: 0 00:23:09.536 Lifetime Error Log Entries: 0 00:23:09.536 Warning Temperature Time: 0 minutes 00:23:09.536 Critical Temperature Time: 0 minutes 00:23:09.536 00:23:09.536 Number of Queues 00:23:09.536 ================ 00:23:09.536 Number of I/O Submission Queues: 127 00:23:09.536 Number of I/O Completion Queues: 127 00:23:09.536 00:23:09.536 Active Namespaces 00:23:09.536 ================= 00:23:09.536 Namespace ID:1 00:23:09.536 Error Recovery Timeout: Unlimited 00:23:09.536 Command Set Identifier: NVM (00h) 00:23:09.536 Deallocate: Supported 00:23:09.536 Deallocated/Unwritten Error: Not Supported 00:23:09.536 Deallocated Read Value: Unknown 00:23:09.536 Deallocate in Write Zeroes: Not Supported 00:23:09.536 Deallocated Guard Field: 0xFFFF 00:23:09.536 Flush: Supported 00:23:09.536 Reservation: Supported 00:23:09.536 Namespace Sharing Capabilities: Multiple Controllers 00:23:09.536 Size (in LBAs): 131072 (0GiB) 00:23:09.536 Capacity (in LBAs): 131072 (0GiB) 00:23:09.536 Utilization (in LBAs): 131072 (0GiB) 00:23:09.536 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:09.536 EUI64: ABCDEF0123456789 00:23:09.536 UUID: 86aa4e7d-b83c-45d2-9ef8-e026c9649f3e 00:23:09.536 Thin Provisioning: Not Supported 00:23:09.536 Per-NS Atomic Units: Yes 00:23:09.536 Atomic Boundary Size (Normal): 0 00:23:09.536 Atomic Boundary Size (PFail): 0 00:23:09.536 Atomic Boundary Offset: 0 00:23:09.536 Maximum Single Source Range Length: 65535 00:23:09.536 Maximum Copy Length: 65535 00:23:09.536 Maximum Source Range Count: 1 00:23:09.536 NGUID/EUI64 Never Reused: No 00:23:09.536 Namespace Write Protected: No 00:23:09.536 Number of LBA Formats: 1 00:23:09.536 Current LBA Format: LBA Format #00 00:23:09.536 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:09.536 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:09.536 rmmod nvme_tcp 00:23:09.536 rmmod nvme_fabrics 
00:23:09.536 rmmod nvme_keyring 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 502366 ']' 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 502366 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 502366 ']' 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 502366 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.536 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 502366 00:23:09.796 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:09.796 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:09.796 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 502366' 00:23:09.796 killing process with pid 502366 00:23:09.796 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 502366 00:23:09.796 18:31:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 502366 00:23:09.796 18:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:09.796 18:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:09.796 18:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:09.796 18:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:09.796 18:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:23:09.796 18:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:09.796 18:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:23:09.796 18:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:09.796 18:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:09.796 18:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.796 18:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.796 18:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:12.387 00:23:12.387 real 0m9.758s 00:23:12.387 user 0m7.609s 00:23:12.387 sys 0m4.751s 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.387 ************************************ 00:23:12.387 END TEST nvmf_identify 00:23:12.387 ************************************ 00:23:12.387 18:31:05 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.387 ************************************ 00:23:12.387 START TEST nvmf_perf 00:23:12.387 ************************************ 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:12.387 * Looking for test storage... 00:23:12.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:12.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.387 --rc genhtml_branch_coverage=1 00:23:12.387 --rc genhtml_function_coverage=1 00:23:12.387 --rc genhtml_legend=1 00:23:12.387 --rc geninfo_all_blocks=1 00:23:12.387 --rc geninfo_unexecuted_blocks=1 00:23:12.387 00:23:12.387 ' 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:12.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.387 --rc genhtml_branch_coverage=1 00:23:12.387 --rc genhtml_function_coverage=1 00:23:12.387 --rc genhtml_legend=1 00:23:12.387 --rc geninfo_all_blocks=1 00:23:12.387 --rc geninfo_unexecuted_blocks=1 00:23:12.387 00:23:12.387 ' 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:12.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.387 --rc genhtml_branch_coverage=1 00:23:12.387 --rc genhtml_function_coverage=1 00:23:12.387 --rc genhtml_legend=1 00:23:12.387 --rc geninfo_all_blocks=1 00:23:12.387 --rc geninfo_unexecuted_blocks=1 00:23:12.387 00:23:12.387 ' 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:12.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.387 --rc genhtml_branch_coverage=1 00:23:12.387 --rc genhtml_function_coverage=1 00:23:12.387 --rc genhtml_legend=1 00:23:12.387 --rc geninfo_all_blocks=1 00:23:12.387 --rc geninfo_unexecuted_blocks=1 00:23:12.387 00:23:12.387 ' 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.387 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:12.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.388 18:31:05 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:12.388 18:31:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:18.959 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:18.959 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:18.959 Found net devices under 0000:86:00.0: cvl_0_0 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:18.959 18:31:11 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:18.959 Found net devices under 0000:86:00.1: cvl_0_1 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:18.959 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.960 18:31:11 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:18.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:23:18.960 00:23:18.960 --- 10.0.0.2 ping statistics --- 00:23:18.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.960 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:18.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:23:18.960 00:23:18.960 --- 10.0.0.1 ping statistics --- 00:23:18.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.960 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=506136 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 506136 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 506136 ']' 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:18.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.960 18:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:18.960 [2024-10-08 18:31:11.459823] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:23:18.960 [2024-10-08 18:31:11.459879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.960 [2024-10-08 18:31:11.531096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.960 [2024-10-08 18:31:11.611019] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.960 [2024-10-08 18:31:11.611057] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.960 [2024-10-08 18:31:11.611064] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.960 [2024-10-08 18:31:11.611070] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.960 [2024-10-08 18:31:11.611076] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.960 [2024-10-08 18:31:11.612686] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.960 [2024-10-08 18:31:11.612797] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.960 [2024-10-08 18:31:11.612903] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.960 [2024-10-08 18:31:11.612905] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:19.219 18:31:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.219 18:31:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:19.219 18:31:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:19.219 18:31:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:19.219 18:31:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:19.219 18:31:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.219 18:31:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:19.219 18:31:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:22.504 18:31:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:22.504 18:31:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:22.504 18:31:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:22.504 18:31:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:22.504 18:31:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
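The trace above shows host/perf.sh assembling its bdev list in four RPC steps: gen_nvme.sh emits a bdev config for the local NVMe controller, load_subsystem_config applies it to the target, framework_get_config bdev is filtered through jq to recover Nvme0's PCIe address (0000:5e:00.0 on this host), and a 64 MB malloc bdev is created as a second test device. A minimal sketch of the equivalent sequence, run from the SPDK repo root (the pipe into load_subsystem_config and the default rpc.py socket are assumptions inferred from the trace):

    rpc=scripts/rpc.py
    scripts/gen_nvme.sh | $rpc load_subsystem_config           # attach local NVMe as Nvme0 (pipe assumed)
    local_nvme_trid=$($rpc framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr')  # -> 0000:5e:00.0 in this run
    bdevs=$($rpc bdev_malloc_create 64 512)                    # 64 MB malloc bdev, 512 B blocks; prints "Malloc0"
    [ -n "$local_nvme_trid" ] && bdevs="$bdevs Nvme0n1"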
00:23:22.504 18:31:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:22.504 18:31:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:22.504 18:31:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:22.504 18:31:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:22.763 [2024-10-08 18:31:15.978261] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.763 18:31:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:23.020 18:31:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:23.021 18:31:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:23.278 18:31:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:23.278 18:31:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:23.536 18:31:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:23.536 [2024-10-08 18:31:16.778538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.536 18:31:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:23.794 18:31:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:23.794 18:31:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:23.794 18:31:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:23.794 18:31:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:25.172 Initializing NVMe Controllers 00:23:25.172 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:25.172 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:25.172 Initialization complete. Launching workers. 
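Here perf.sh publishes both bdevs over NVMe/TCP: it creates the TCP transport, declares subsystem nqn.2016-06.io.spdk:cnode1 (-a allows any host, -s sets the serial number), attaches Malloc0 and Nvme0n1 as namespaces, and opens data and discovery listeners on 10.0.0.2:4420. A condensed sketch with the same flags the trace records (the -o transport flag is reproduced as logged, not interpreted):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o                       # flags as recorded in NVMF_TRANSPORT_OPTS
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    for bdev in Malloc0 Nvme0n1; do                            # become NSID 1 and NSID 2
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420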
00:23:25.172 ======================================================== 00:23:25.172 Latency(us) 00:23:25.172 Device Information : IOPS MiB/s Average min max 00:23:25.172 PCIE (0000:5e:00.0) NSID 1 from core 0: 97722.37 381.73 327.08 34.61 4646.36 00:23:25.172 ======================================================== 00:23:25.172 Total : 97722.37 381.73 327.08 34.61 4646.36 00:23:25.172 00:23:25.172 18:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:26.548 Initializing NVMe Controllers 00:23:26.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:26.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:26.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:26.548 Initialization complete. Launching workers. 00:23:26.548 ======================================================== 00:23:26.548 Latency(us) 00:23:26.548 Device Information : IOPS MiB/s Average min max 00:23:26.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 444.00 1.73 2285.86 113.67 45791.94 00:23:26.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.00 0.16 25455.04 7210.40 47897.92 00:23:26.548 ======================================================== 00:23:26.548 Total : 485.00 1.89 4244.49 113.67 47897.92 00:23:26.548 00:23:26.548 18:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:27.483 Initializing NVMe Controllers 00:23:27.483 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:27.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:27.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:27.483 Initialization complete. Launching workers. 00:23:27.483 ======================================================== 00:23:27.483 Latency(us) 00:23:27.483 Device Information : IOPS MiB/s Average min max 00:23:27.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11166.31 43.62 2865.13 489.22 7762.97 00:23:27.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3840.35 15.00 8344.51 5081.12 15913.46 00:23:27.483 ======================================================== 00:23:27.483 Total : 15006.66 58.62 4267.36 489.22 15913.46 00:23:27.483 00:23:27.483 18:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:27.483 18:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:27.483 18:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:30.771 Initializing NVMe Controllers 00:23:30.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:30.771 Controller IO queue size 128, less than required. 00:23:30.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
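The spdk_nvme_perf runs above sweep queue depth over the fabric: -q sets the per-queue IO depth, -o the IO size in bytes, -w randrw with -M 50 a 50/50 random read/write mix, -t the run time in seconds, and -r the target's transport ID string; the -HI pair in the second run appears to enable TCP header and data digests, adding per-PDU CRC work. A representative invocation (binary path per this workspace):

    perf=build/bin/spdk_nvme_perf
    # qd 32, 4 KiB IOs, 50/50 random read/write, 1 s, against the NVMe/TCP target
    $perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'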
00:23:30.771 Controller IO queue size 128, less than required. 00:23:30.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:30.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:30.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:30.771 Initialization complete. Launching workers. 00:23:30.771 ======================================================== 00:23:30.771 Latency(us) 00:23:30.771 Device Information : IOPS MiB/s Average min max 00:23:30.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1744.36 436.09 74656.36 54713.85 118338.80 00:23:30.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 589.95 147.49 225904.25 87335.48 319751.89 00:23:30.771 ======================================================== 00:23:30.771 Total : 2334.32 583.58 112881.36 54713.85 319751.89 00:23:30.771 00:23:30.771 18:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:30.771 No valid NVMe controllers or AIO or URING devices found 00:23:30.771 Initializing NVMe Controllers 00:23:30.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:30.771 Controller IO queue size 128, less than required. 00:23:30.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:30.771 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:30.771 Controller IO queue size 128, less than required. 00:23:30.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:30.771 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:30.771 WARNING: Some requested NVMe devices were skipped 00:23:30.771 18:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:33.364 Initializing NVMe Controllers 00:23:33.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:33.364 Controller IO queue size 128, less than required. 00:23:33.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:33.364 Controller IO queue size 128, less than required. 00:23:33.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:33.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:33.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:33.364 Initialization complete. Launching workers. 
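The -o 36964 run is effectively a negative test: spdk_nvme_perf only accepts IO sizes that are a whole number of logical blocks, and 36964 is not a multiple of the 512-byte sector size (36964 = 72 * 512 + 100), so both namespaces are dropped and the tool reports that no valid devices remain. The check it is making amounts to:

    io_size=36964 sector=512
    if (( io_size % sector != 0 )); then
        echo "IO size $io_size (-o) is not a multiple of sector size $sector"
    fi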
00:23:33.364 00:23:33.364 ==================== 00:23:33.364 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:33.364 TCP transport: 00:23:33.364 polls: 14689 00:23:33.364 idle_polls: 11263 00:23:33.364 sock_completions: 3426 00:23:33.364 nvme_completions: 6395 00:23:33.364 submitted_requests: 9576 00:23:33.364 queued_requests: 1 00:23:33.364 00:23:33.364 ==================== 00:23:33.364 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:33.364 TCP transport: 00:23:33.364 polls: 14670 00:23:33.364 idle_polls: 11394 00:23:33.364 sock_completions: 3276 00:23:33.364 nvme_completions: 6399 00:23:33.364 submitted_requests: 9606 00:23:33.364 queued_requests: 1 00:23:33.364 ======================================================== 00:23:33.364 Latency(us) 00:23:33.364 Device Information : IOPS MiB/s Average min max 00:23:33.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1597.36 399.34 82543.75 55336.21 135409.62 00:23:33.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1598.36 399.59 80484.51 41642.72 125722.78 00:23:33.364 ======================================================== 00:23:33.364 Total : 3195.71 798.93 81513.81 41642.72 135409.62 00:23:33.364 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:33.364 rmmod nvme_tcp 00:23:33.364 rmmod nvme_fabrics 00:23:33.364 rmmod nvme_keyring 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 506136 ']' 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 506136 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 506136 ']' 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 506136 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 506136 00:23:33.364 18:31:26 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 506136' 00:23:33.364 killing process with pid 506136 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 506136 00:23:33.364 18:31:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 506136 00:23:35.897 18:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:35.897 18:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:35.898 18:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:35.898 18:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:35.898 18:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:23:35.898 18:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:35.898 18:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:23:35.898 18:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:35.898 18:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:35.898 18:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.898 18:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.898 18:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:37.804 00:23:37.804 real 0m25.472s 00:23:37.804 user 1m7.542s 00:23:37.804 sys 0m8.490s 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:37.804 ************************************ 00:23:37.804 END TEST nvmf_perf 00:23:37.804 ************************************ 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.804 ************************************ 00:23:37.804 START TEST nvmf_fio_host 00:23:37.804 ************************************ 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:37.804 * Looking for test storage... 
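The perf test's teardown, recorded just above before nvmf_fio_host begins, runs in the order nvmftestfini logs it: unload the nvme-tcp/nvme-fabrics/nvme-keyring modules, kill the target by pid, strip the SPDK_NVMF-tagged iptables rules by filtering iptables-save through grep -v before restoring, remove the SPDK network namespace (its deletion is hidden behind the xtrace-disabled _remove_spdk_ns, so the netns command below is an assumption), and flush the initiator address. A compressed sketch with values from this run:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 506136                                              # killprocess: nvmf_tgt pid, waits for exit (simplified)
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only SPDK's tagged ACCEPT rules
    ip netns delete cvl_0_0_ns_spdk                          # _remove_spdk_ns (assumed form)
    ip -4 addr flush cvl_0_1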
00:23:37.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:37.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.804 --rc genhtml_branch_coverage=1 00:23:37.804 --rc genhtml_function_coverage=1 00:23:37.804 --rc genhtml_legend=1 00:23:37.804 --rc geninfo_all_blocks=1 00:23:37.804 --rc geninfo_unexecuted_blocks=1 00:23:37.804 00:23:37.804 ' 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:37.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.804 --rc genhtml_branch_coverage=1 00:23:37.804 --rc genhtml_function_coverage=1 00:23:37.804 --rc genhtml_legend=1 00:23:37.804 --rc geninfo_all_blocks=1 00:23:37.804 --rc geninfo_unexecuted_blocks=1 00:23:37.804 00:23:37.804 ' 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:37.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.804 --rc genhtml_branch_coverage=1 00:23:37.804 --rc genhtml_function_coverage=1 00:23:37.804 --rc genhtml_legend=1 00:23:37.804 --rc geninfo_all_blocks=1 00:23:37.804 --rc geninfo_unexecuted_blocks=1 00:23:37.804 00:23:37.804 ' 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:37.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.804 --rc genhtml_branch_coverage=1 00:23:37.804 --rc genhtml_function_coverage=1 00:23:37.804 --rc genhtml_legend=1 00:23:37.804 --rc geninfo_all_blocks=1 00:23:37.804 --rc geninfo_unexecuted_blocks=1 00:23:37.804 00:23:37.804 ' 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:37.804 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.805 18:31:30 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:37.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:37.805 
18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:37.805 18:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:44.500 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:44.500 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:44.500 Found net devices under 0000:86:00.0: cvl_0_0 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:44.500 Found net devices under 0000:86:00.1: cvl_0_1 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:44.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:23:44.500 00:23:44.500 --- 10.0.0.2 ping statistics --- 00:23:44.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.500 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:23:44.500 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:44.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:23:44.500 00:23:44.500 --- 10.0.0.1 ping statistics --- 00:23:44.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.501 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=512468 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 512468 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 512468 ']' 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:44.501 18:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.501 [2024-10-08 18:31:36.995144] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
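[Note] The nvmf_tcp_init sequence traced above pins one e810 port (cvl_0_0) inside a private network namespace and leaves its peer (cvl_0_1) in the root namespace, so target and initiator traffic crosses a real link even though both ends live on one host; the two pings are the connectivity check before nvmf_tgt is launched under ip netns exec. A minimal stand-alone sketch of the same bring-up, with the cvl_0_* devices replaced by placeholder interfaces eth_tgt/eth_ini (the namespace name and both interface names here are illustrative, not SPDK's):

  # Sketch: rebuild the one-host target/initiator topology that nvmf_tcp_init sets up.
  # Assumes two cabled ports, here called eth_tgt and eth_ini (placeholder names).
  set -e
  ns=tgt_ns_sketch                              # illustrative namespace name
  ip netns add "$ns"
  ip link set eth_tgt netns "$ns"               # target port disappears into the namespace
  ip addr add 10.0.0.1/24 dev eth_ini           # initiator side stays in the root namespace
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev eth_tgt
  ip link set eth_ini up
  ip netns exec "$ns" ip link set eth_tgt up
  ip netns exec "$ns" ip link set lo up
  # Open the NVMe/TCP port toward the initiator, tagged so a later
  # iptables-save | grep -v SPDK_NVMF | iptables-restore can strip it again
  iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:sketch
  ping -c 1 10.0.0.2                            # root namespace -> namespaced target
  ip netns exec "$ns" ping -c 1 10.0.0.1        # namespaced target -> initiator

Once both pings pass, the target binds 10.0.0.2 from inside the namespace, which is why every nvmf_tgt and target-side rpc in this log is wrapped in ip netns exec.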
00:23:44.501 [2024-10-08 18:31:36.995188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.501 [2024-10-08 18:31:37.069326] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.501 [2024-10-08 18:31:37.147523] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.501 [2024-10-08 18:31:37.147563] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.501 [2024-10-08 18:31:37.147571] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.501 [2024-10-08 18:31:37.147578] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.501 [2024-10-08 18:31:37.147583] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.501 [2024-10-08 18:31:37.149194] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.501 [2024-10-08 18:31:37.149304] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.501 [2024-10-08 18:31:37.149423] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.501 [2024-10-08 18:31:37.149424] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.760 18:31:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:44.760 18:31:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:23:44.760 18:31:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:44.760 [2024-10-08 18:31:37.999827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.760 18:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:44.760 18:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:44.760 18:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.760 18:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:45.018 Malloc1 00:23:45.019 18:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:45.276 18:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:45.534 18:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:45.534 [2024-10-08 18:31:38.845786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.794 18:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:45.794 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:46.063 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:46.063 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:46.063 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:46.063 18:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:46.321 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:46.321 fio-3.35 00:23:46.321 Starting 1 thread 00:23:48.853 00:23:48.853 test: (groupid=0, jobs=1): 
err= 0: pid=513065: Tue Oct 8 18:31:41 2024 00:23:48.853 read: IOPS=11.8k, BW=46.1MiB/s (48.3MB/s)(92.4MiB/2005msec) 00:23:48.853 slat (nsec): min=1505, max=241574, avg=1734.62, stdev=2229.28 00:23:48.853 clat (usec): min=3165, max=10639, avg=5992.06, stdev=469.23 00:23:48.853 lat (usec): min=3202, max=10641, avg=5993.80, stdev=469.13 00:23:48.853 clat percentiles (usec): 00:23:48.853 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:23:48.853 | 30.00th=[ 5800], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6128], 00:23:48.853 | 70.00th=[ 6194], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6718], 00:23:48.853 | 99.00th=[ 7111], 99.50th=[ 7242], 99.90th=[ 8848], 99.95th=[ 9503], 00:23:48.853 | 99.99th=[10028] 00:23:48.853 bw ( KiB/s): min=46000, max=47912, per=99.96%, avg=47174.00, stdev=848.01, samples=4 00:23:48.853 iops : min=11500, max=11976, avg=11793.50, stdev=211.39, samples=4 00:23:48.853 write: IOPS=11.7k, BW=45.8MiB/s (48.1MB/s)(91.9MiB/2005msec); 0 zone resets 00:23:48.853 slat (nsec): min=1543, max=227786, avg=1793.41, stdev=1678.95 00:23:48.853 clat (usec): min=2421, max=9597, avg=4845.62, stdev=378.44 00:23:48.853 lat (usec): min=2436, max=9599, avg=4847.41, stdev=378.40 00:23:48.853 clat percentiles (usec): 00:23:48.853 | 1.00th=[ 3982], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:23:48.853 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:23:48.853 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:23:48.853 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 7177], 99.95th=[ 8586], 00:23:48.853 | 99.99th=[ 9503] 00:23:48.853 bw ( KiB/s): min=46480, max=47488, per=100.00%, avg=46948.00, stdev=426.61, samples=4 00:23:48.853 iops : min=11620, max=11872, avg=11737.00, stdev=106.65, samples=4 00:23:48.853 lat (msec) : 4=0.60%, 10=99.39%, 20=0.01% 00:23:48.853 cpu : usr=73.30%, sys=25.70%, ctx=91, majf=0, minf=2 00:23:48.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:48.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:48.853 issued rwts: total=23655,23533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:48.853 00:23:48.853 Run status group 0 (all jobs): 00:23:48.853 READ: bw=46.1MiB/s (48.3MB/s), 46.1MiB/s-46.1MiB/s (48.3MB/s-48.3MB/s), io=92.4MiB (96.9MB), run=2005-2005msec 00:23:48.853 WRITE: bw=45.8MiB/s (48.1MB/s), 45.8MiB/s-45.8MiB/s (48.1MB/s-48.1MB/s), io=91.9MiB (96.4MB), run=2005-2005msec 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
local sanitizers 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:48.853 18:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:48.853 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:48.853 fio-3.35 00:23:48.853 Starting 1 thread 00:23:51.387 00:23:51.387 test: (groupid=0, jobs=1): err= 0: pid=513639: Tue Oct 8 18:31:44 2024 00:23:51.387 read: IOPS=11.0k, BW=172MiB/s (181MB/s)(346MiB/2005msec) 00:23:51.387 slat (usec): min=2, max=102, avg= 2.85, stdev= 1.37 00:23:51.387 clat (usec): min=1373, max=12604, avg=6701.96, stdev=1512.41 00:23:51.387 lat (usec): min=1376, max=12606, avg=6704.82, stdev=1512.50 00:23:51.387 clat percentiles (usec): 00:23:51.387 | 1.00th=[ 3556], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5342], 00:23:51.387 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7177], 00:23:51.387 | 70.00th=[ 7504], 80.00th=[ 7963], 90.00th=[ 8586], 95.00th=[ 9110], 00:23:51.387 | 99.00th=[10552], 99.50th=[10814], 99.90th=[11469], 99.95th=[11600], 00:23:51.387 | 99.99th=[12125] 00:23:51.387 bw ( KiB/s): min=83904, max=97280, per=50.15%, avg=88512.00, stdev=6016.85, samples=4 00:23:51.387 iops : min= 5244, max= 6080, avg=5532.00, stdev=376.05, samples=4 00:23:51.387 write: IOPS=6548, BW=102MiB/s (107MB/s)(181MiB/1766msec); 0 zone resets 00:23:51.387 slat 
(usec): min=29, max=294, avg=31.71, stdev= 5.66 00:23:51.387 clat (usec): min=2564, max=13928, avg=8603.78, stdev=1457.34 00:23:51.387 lat (usec): min=2597, max=13958, avg=8635.48, stdev=1457.93 00:23:51.387 clat percentiles (usec): 00:23:51.387 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6849], 20.00th=[ 7373], 00:23:51.387 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:23:51.387 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11207], 00:23:51.387 | 99.00th=[12125], 99.50th=[12780], 99.90th=[13435], 99.95th=[13566], 00:23:51.387 | 99.99th=[13698] 00:23:51.387 bw ( KiB/s): min=86496, max=101376, per=87.96%, avg=92160.00, stdev=6417.42, samples=4 00:23:51.387 iops : min= 5406, max= 6336, avg=5760.00, stdev=401.09, samples=4 00:23:51.387 lat (msec) : 2=0.02%, 4=1.71%, 10=90.86%, 20=7.41% 00:23:51.387 cpu : usr=85.48%, sys=13.82%, ctx=55, majf=0, minf=2 00:23:51.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:51.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:51.387 issued rwts: total=22115,11564,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:51.387 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:51.387 00:23:51.387 Run status group 0 (all jobs): 00:23:51.387 READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=346MiB (362MB), run=2005-2005msec 00:23:51.387 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=181MiB (189MB), run=1766-1766msec 00:23:51.387 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.646 rmmod nvme_tcp 00:23:51.646 rmmod nvme_fabrics 00:23:51.646 rmmod nvme_keyring 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 512468 ']' 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 512468 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 512468 ']' 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 
-- # kill -0 512468 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 512468 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 512468' 00:23:51.646 killing process with pid 512468 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 512468 00:23:51.646 18:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 512468 00:23:51.905 18:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:51.905 18:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:51.905 18:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:51.905 18:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:51.905 18:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:23:51.905 18:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:51.905 18:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:23:51.905 18:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:51.905 18:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:51.905 18:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.905 18:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.905 18:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.810 18:31:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.069 00:23:54.069 real 0m16.373s 00:23:54.069 user 0m48.655s 00:23:54.069 sys 0m6.591s 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.069 ************************************ 00:23:54.069 END TEST nvmf_fio_host 00:23:54.069 ************************************ 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.069 ************************************ 00:23:54.069 START TEST nvmf_failover 00:23:54.069 ************************************ 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:54.069 * Looking for test storage... 00:23:54.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:54.069 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:54.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.070 --rc genhtml_branch_coverage=1 00:23:54.070 --rc genhtml_function_coverage=1 00:23:54.070 --rc genhtml_legend=1 00:23:54.070 --rc geninfo_all_blocks=1 00:23:54.070 --rc geninfo_unexecuted_blocks=1 00:23:54.070 00:23:54.070 ' 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:54.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.070 --rc genhtml_branch_coverage=1 00:23:54.070 --rc genhtml_function_coverage=1 00:23:54.070 --rc genhtml_legend=1 00:23:54.070 --rc geninfo_all_blocks=1 00:23:54.070 --rc geninfo_unexecuted_blocks=1 00:23:54.070 00:23:54.070 ' 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:54.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.070 --rc genhtml_branch_coverage=1 00:23:54.070 --rc genhtml_function_coverage=1 00:23:54.070 --rc genhtml_legend=1 00:23:54.070 --rc geninfo_all_blocks=1 00:23:54.070 --rc geninfo_unexecuted_blocks=1 00:23:54.070 00:23:54.070 ' 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:54.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.070 --rc genhtml_branch_coverage=1 00:23:54.070 --rc genhtml_function_coverage=1 00:23:54.070 --rc genhtml_legend=1 00:23:54.070 --rc geninfo_all_blocks=1 00:23:54.070 --rc geninfo_unexecuted_blocks=1 00:23:54.070 00:23:54.070 ' 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.070 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
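[Note] The `lt 1.15 2` trace a few lines up is scripts/common.sh comparing the detected lcov version against 2: both strings are split on '.', '-' and ':', the components are compared left to right over the longer of the two lists, and the first differing field decides, which is what selects the old-style --rc lcov_branch_coverage/lcov_function_coverage flags here. A self-contained re-implementation of that comparison (the function name ver_lt is ours, and purely numeric components are assumed):

  # Sketch: component-wise version comparison in the spirit of scripts/common.sh cmp_versions.
  ver_lt() {                                    # ver_lt A B -> exit 0 when A < B
      local -a a b
      IFS=.-: read -ra a <<< "$1"
      IFS=.-: read -ra b <<< "$2"
      local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # missing fields count as 0
      done
      return 1                                  # equal versions are not less-than
  }
  ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x: use the --rc branch/function coverage flags"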
00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.330 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:00.901 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.901 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:00.902 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:00.902 Found net devices under 0000:86:00.0: cvl_0_0 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:00.902 Found net devices under 0000:86:00.1: cvl_0_1 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
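[Note] The discovery loop that just ran turns each whitelisted PCI function (the two 0x159b e810 ports on this rig) into a kernel interface name by globbing its sysfs net/ directory, keeping only interfaces whose link is up, which is how cvl_0_0 and cvl_0_1 end up in net_devs. The same lookup reduced to its core, with the pci_devs list hard-coded for illustration and an operstate read standing in for the script's up == up test:

  # Sketch: resolve PCI functions to their netdev names the way nvmf/common.sh does.
  pci_devs=(0000:86:00.0 0000:86:00.1)          # illustrative; common.sh derives this from device-ID tables
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      [[ -e ${pci_net_devs[0]} ]] || continue   # glob found nothing: no netdev bound to this function
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs prefix, keep bare interface names
      for dev in "${pci_net_devs[@]}"; do
          [[ $(< "/sys/class/net/$dev/operstate") == up ]] || continue
          echo "Found net devices under $pci: $dev"
          net_devs+=("$dev")
      done
  done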
00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:00.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:24:00.902 00:24:00.902 --- 10.0.0.2 ping statistics --- 00:24:00.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.902 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:00.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:24:00.902 00:24:00.902 --- 10.0.0.1 ping statistics --- 00:24:00.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.902 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=517610 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 517610 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 517610 ']' 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:00.902 18:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:00.902 [2024-10-08 18:31:53.462481] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:24:00.902 [2024-10-08 18:31:53.462525] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.902 [2024-10-08 18:31:53.533770] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:00.902 [2024-10-08 18:31:53.605027] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:00.902 [2024-10-08 18:31:53.605071] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.902 [2024-10-08 18:31:53.605078] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.902 [2024-10-08 18:31:53.605084] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.902 [2024-10-08 18:31:53.605089] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.902 [2024-10-08 18:31:53.606177] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.902 [2024-10-08 18:31:53.606288] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.902 [2024-10-08 18:31:53.606289] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.161 18:31:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:01.161 18:31:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:01.161 18:31:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:01.161 18:31:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:01.161 18:31:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:01.161 18:31:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.161 18:31:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:01.421 [2024-10-08 18:31:54.501077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.421 18:31:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:01.421 Malloc0 00:24:01.680 18:31:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.680 18:31:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:01.940 18:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.199 [2024-10-08 18:31:55.313735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.199 18:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:02.199 [2024-10-08 18:31:55.510273] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:02.458 18:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:02.458 [2024-10-08 18:31:55.706892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:02.458 18:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=517893 00:24:02.458 18:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:02.458 18:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:02.458 18:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 517893 /var/tmp/bdevperf.sock 00:24:02.458 18:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 517893 ']' 00:24:02.458 18:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.458 18:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:02.458 18:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.458 18:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:02.458 18:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:03.395 18:31:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:03.395 18:31:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:03.395 18:31:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:03.960 NVMe0n1 00:24:03.960 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:04.219 00:24:04.219 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=518281 00:24:04.219 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:04.219 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:05.156 18:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.413 [2024-10-08 18:31:58.614867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1597390 is same with the state(6) to be set 00:24:05.413 [2024-10-08 18:31:58.614944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1597390 is same with the state(6) to be set 00:24:05.413 [2024-10-08 18:31:58.614953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1597390 is same with the state(6) to be set 00:24:05.413 [2024-10-08 
18:31:58.614960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1597390 is same with the state(6) to be set 00:24:05.413 [2024-10-08 18:31:58.614966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1597390 is same with the state(6) to be set 00:24:05.413 [2024-10-08 18:31:58.614973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1597390 is same with the state(6) to be set 00:24:05.413 18:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:08.699 18:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:08.957 00:24:08.957 18:32:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:09.217 [2024-10-08 18:32:02.292176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 
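(The trace above is the core of the failover exercise: bdevperf attaches the same subsystem, nqn.2016-06.io.spdk:cnode1, over several portals with -x failover, the test then drops the active listener on the target side, and the verify workload is expected to continue on a surviving path. A minimal sketch of the same flow, assuming a target already serving the subsystem and with the long workspace path to rpc.py abbreviated:

  # attach two paths to the same subsystem in failover mode (commands as in the trace above)
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # drop the active portal on the target side; I/O should fail over to port 4421
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 )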
00:24:09.217 [2024-10-08 18:32:02.292316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 [2024-10-08 18:32:02.292405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598190 is same with the state(6) to be set 00:24:09.217 18:32:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:12.503 18:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:12.503 [2024-10-08 18:32:05.506012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.503 18:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:13.439 18:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:13.439 18:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 518281 00:24:20.025 { 00:24:20.025 "results": [ 00:24:20.025 { 00:24:20.025 "job": "NVMe0n1", 00:24:20.025 "core_mask": "0x1", 00:24:20.025 "workload": "verify", 00:24:20.025 "status": "finished", 00:24:20.025 "verify_range": 
{ 00:24:20.025 "start": 0, 00:24:20.025 "length": 16384 00:24:20.025 }, 00:24:20.025 "queue_depth": 128, 00:24:20.025 "io_size": 4096, 00:24:20.025 "runtime": 15.010448, 00:24:20.025 "iops": 11315.251883221606, 00:24:20.025 "mibps": 44.2002026688344, 00:24:20.025 "io_failed": 8373, 00:24:20.025 "io_timeout": 0, 00:24:20.025 "avg_latency_us": 10759.326039790307, 00:24:20.025 "min_latency_us": 454.4609523809524, 00:24:20.025 "max_latency_us": 19473.554285714286 00:24:20.025 } 00:24:20.025 ], 00:24:20.025 "core_count": 1 00:24:20.025 } 00:24:20.025 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 517893 00:24:20.025 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 517893 ']' 00:24:20.025 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 517893 00:24:20.025 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:20.025 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:20.025 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 517893 00:24:20.025 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:20.025 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:20.025 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 517893' 00:24:20.025 killing process with pid 517893 00:24:20.025 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 517893 00:24:20.025 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 517893 00:24:20.025 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:20.025 [2024-10-08 18:31:55.782078] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:24:20.025 [2024-10-08 18:31:55.782134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid517893 ] 00:24:20.025 [2024-10-08 18:31:55.850495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.025 [2024-10-08 18:31:55.924335] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.025 Running I/O for 15 seconds... 
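(The perform_tests summary just above is internally consistent: with 4096-byte I/Os, the reported 11315.25 IOPS corresponds to the reported 44.20 MiB/s, and the runtime matches the requested -t 15 plus a small amount of RPC overhead. A quick check using only values from that JSON:

  awk 'BEGIN { printf "%.10f MiB/s\n", 11315.251883221606 * 4096 / (1024 * 1024) }'   # prints 44.2002026688 MiB/s )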
00:24:20.025 11207.00 IOPS, 43.78 MiB/s [2024-10-08T16:32:13.348Z] [2024-10-08 18:31:58.615176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.025 [2024-10-08 18:31:58.615209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.026 [2024-10-08 18:31:58.615234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.026 [2024-10-08 18:31:58.615250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.026 [2024-10-08 18:31:58.615265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.026 [2024-10-08 18:31:58.615280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.026 [2024-10-08 18:31:58.615294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.026 [2024-10-08 18:31:58.615309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.026 [2024-10-08 18:31:58.615323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:20.026 [2024-10-08 18:31:58.615361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 
18:31:58.615525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.026 [2024-10-08 18:31:58.615691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:19 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.026 [2024-10-08 18:31:58.615846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.026 [2024-10-08 18:31:58.615852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.615860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.615867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.615875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.615881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.615890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.615896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.615904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.615910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.615918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.615924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.615932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.615941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.615949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.615956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.615964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98288 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.615970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.615978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.615984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.615993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.615999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:20.027 [2024-10-08 18:31:58.616117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616266] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616419] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-10-08 18:31:58.616477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-10-08 18:31:58.616485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:20.028 [2024-10-08 18:31:58.616718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616866] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.616988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.616997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-10-08 18:31:58.617004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.617011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.028 [2024-10-08 18:31:58.617018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.617025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.028 [2024-10-08 18:31:58.617032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.617040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.028 [2024-10-08 18:31:58.617046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.617054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.028 [2024-10-08 18:31:58.617062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.617070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.028 [2024-10-08 18:31:58.617078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.617086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.028 [2024-10-08 18:31:58.617093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.028 [2024-10-08 18:31:58.617100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bdc60 is same with the state(6) to be set 00:24:20.028 [2024-10-08 18:31:58.617109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.029 [2024-10-08 18:31:58.617114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.029 [2024-10-08 18:31:58.617120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98976 len:8 PRP1 0x0 PRP2 0x0 00:24:20.029 [2024-10-08 18:31:58.617126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:31:58.617168] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15bdc60 was disconnected and freed. reset controller. 
00:24:20.029 [2024-10-08 18:31:58.617177] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:20.029 [2024-10-08 18:31:58.617199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.029 [2024-10-08 18:31:58.617207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:31:58.617214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.029 [2024-10-08 18:31:58.617220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:31:58.617227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.029 [2024-10-08 18:31:58.617233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:31:58.617240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.029 [2024-10-08 18:31:58.617247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:31:58.617253] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:20.029 [2024-10-08 18:31:58.617290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159b400 (9): Bad file descriptor 00:24:20.029 [2024-10-08 18:31:58.620058] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:20.029 [2024-10-08 18:31:58.693477] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
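(The burst of ABORTED - SQ DELETION completions above is the expected signature here: removing the 4420 listener disconnects the active qpair, bdev_nvme aborts the queued I/O in bdev_nvme_disconnected_qpair_cb, and bdev_nvme_failover_trid moves the controller to the next path, 10.0.0.2:4421, before the reset. When reproducing this interactively, the controller's path state can be inspected over the same RPC socket; this is not part of the test script, but bdev_nvme_get_controllers is a standard SPDK RPC:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0 )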
00:24:20.029 10858.50 IOPS, 42.42 MiB/s [2024-10-08T16:32:13.352Z] 11024.67 IOPS, 43.07 MiB/s [2024-10-08T16:32:13.352Z] 11149.50 IOPS, 43.55 MiB/s [2024-10-08T16:32:13.352Z] [2024-10-08 18:32:02.293205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293391] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.029 [2024-10-08 18:32:02.293644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.029 [2024-10-08 18:32:02.293652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.030 [2024-10-08 18:32:02.293659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.030 [2024-10-08 18:32:02.293673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.030 [2024-10-08 18:32:02.293688] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.030 [2024-10-08 18:32:02.293702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.293973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 
18:32:02.293987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.293993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.294001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.294009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.294017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.294024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.294032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.294039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.294046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.294053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.294061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.294067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.294075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.294082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.294090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.294096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.294104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.294110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.294118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.294124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.294132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.294138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.294146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.294153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.030 [2024-10-08 18:32:02.294161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.030 [2024-10-08 18:32:02.294167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:25 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61368 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 
18:32:02.294581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.031 [2024-10-08 18:32:02.294700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.031 [2024-10-08 18:32:02.294727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61520 len:8 PRP1 0x0 PRP2 0x0 00:24:20.031 [2024-10-08 18:32:02.294734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.031 
[2024-10-08 18:32:02.294747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.031 [2024-10-08 18:32:02.294755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61528 len:8 PRP1 0x0 PRP2 0x0 00:24:20.031 [2024-10-08 18:32:02.294761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.031 [2024-10-08 18:32:02.294774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.031 [2024-10-08 18:32:02.294779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61536 len:8 PRP1 0x0 PRP2 0x0 00:24:20.031 [2024-10-08 18:32:02.294786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.031 [2024-10-08 18:32:02.294793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.031 [2024-10-08 18:32:02.294797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.031 [2024-10-08 18:32:02.294803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61544 len:8 PRP1 0x0 PRP2 0x0 00:24:20.031 [2024-10-08 18:32:02.294809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.294815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.294820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.294825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61552 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.294831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.294838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.294843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.294849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61560 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.294856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.294862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.294867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.294872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61568 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.294879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.294885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.294890] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.294896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61576 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.294902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.294908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.294913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.294918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61584 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.294924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.294931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.294936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.294942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61592 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.294950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.294957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.294962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.294967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61600 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.294973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.294980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.294984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.294990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61608 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.294996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.295002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.295007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.295013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61616 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.295019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.295025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.295030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.295036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61624 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.295043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.295050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.295054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.295060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60856 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.295066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.295073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.295077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.295083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60864 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.295089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.295095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.295100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.295106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60872 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.295112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.295119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.295124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.295135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60880 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.295142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.295148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.295153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.295159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60888 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.295165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.295172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.295176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 
18:32:02.295182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60896 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.295188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.295195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.295200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.295205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60904 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.295212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.295218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.295223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.295231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60912 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.295237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.295244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.295249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.295254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60920 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.295260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.295267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.295272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.305336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60928 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.305349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.305358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.305364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.305370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60936 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.305381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.305389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.305396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.305403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60944 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.305411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.305418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.305424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.305430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60952 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.305438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.305445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.305451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.305457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60960 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.305465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.032 [2024-10-08 18:32:02.305472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.032 [2024-10-08 18:32:02.305478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.032 [2024-10-08 18:32:02.305484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60968 len:8 PRP1 0x0 PRP2 0x0 00:24:20.032 [2024-10-08 18:32:02.305491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.033 [2024-10-08 18:32:02.305533] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16f47d0 was disconnected and freed. reset controller. 
00:24:20.033 [2024-10-08 18:32:02.305543] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:24:20.033 [2024-10-08 18:32:02.305567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:20.033 [2024-10-08 18:32:02.305576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:20.033 [2024-10-08 18:32:02.305585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:20.033 [2024-10-08 18:32:02.305592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:20.033 [2024-10-08 18:32:02.305601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:20.033 [2024-10-08 18:32:02.305608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:20.033 [2024-10-08 18:32:02.305616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:20.033 [2024-10-08 18:32:02.305623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:20.033 [2024-10-08 18:32:02.305631] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:20.033 [2024-10-08 18:32:02.305656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159b400 (9): Bad file descriptor 
00:24:20.033 [2024-10-08 18:32:02.308880] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:24:20.033 [2024-10-08 18:32:02.384440] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:20.033 11001.80 IOPS, 42.98 MiB/s [2024-10-08T16:32:13.356Z] 11098.67 IOPS, 43.35 MiB/s [2024-10-08T16:32:13.356Z] 11135.00 IOPS, 43.50 MiB/s [2024-10-08T16:32:13.356Z] 11183.88 IOPS, 43.69 MiB/s [2024-10-08T16:32:13.356Z] 11218.89 IOPS, 43.82 MiB/s [2024-10-08T16:32:13.356Z] [2024-10-08 18:32:06.717187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.033 [2024-10-08 18:32:06.717230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... roughly 125 further queued WRITE (lba 92248-92384) and READ (lba 91368-92224) commands, each printed with an identical ABORTED - SQ DELETION (00/08) completion pair, elided ...]
00:24:20.036 [2024-10-08 18:32:06.719116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f7010 is same with the state(6) to be set 00:24:20.036 [2024-10-08 18:32:06.719125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.036 [2024-10-08 18:32:06.719130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.036 [2024-10-08 18:32:06.719136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92232 len:8 PRP1 0x0 PRP2 0x0 00:24:20.036 [2024-10-08 18:32:06.719142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.036 [2024-10-08 18:32:06.719184] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16f7010 was disconnected and freed. reset controller.
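Every queued I/O on the dying qpair drains with the same ABORTED - SQ DELETION (00/08) status before the qpair is freed. When auditing a run like this it is quicker to count the aborts than to read them; a sketch against the per-test capture file this harness writes (try.txt, visible below):

log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
grep -c 'ABORTED - SQ DELETION' "$log"           # total aborted completions
grep -o 'lba:[0-9]*' "$log" | sort -u | wc -l    # distinct LBAs affected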
00:24:20.036 [2024-10-08 18:32:06.719194] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:20.036 [2024-10-08 18:32:06.719218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.036 [2024-10-08 18:32:06.719226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.036 [2024-10-08 18:32:06.719233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.036 [2024-10-08 18:32:06.719240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.036 [2024-10-08 18:32:06.719247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.036 [2024-10-08 18:32:06.719253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.036 [2024-10-08 18:32:06.719260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.036 [2024-10-08 18:32:06.719267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.036 [2024-10-08 18:32:06.719273] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:20.036 [2024-10-08 18:32:06.722054] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:20.036 [2024-10-08 18:32:06.722086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159b400 (9): Bad file descriptor 00:24:20.036 [2024-10-08 18:32:06.756663] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
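That is the third and final failover in the ring 4420 -> 4421 -> 4422 -> 4420, which is exactly what the test asserts next: three 'Resetting controller successful' notices. A sketch of that pass criterion, assuming the same try.txt capture the harness greps:

count=$(grep -c 'Resetting controller successful' \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
if (( count != 3 )); then
  echo "expected 3 successful failover resets, saw $count" >&2
  exit 1
fi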
00:24:20.036 11215.50 IOPS, 43.81 MiB/s [2024-10-08T16:32:13.359Z] 11239.73 IOPS, 43.91 MiB/s [2024-10-08T16:32:13.359Z] 11267.83 IOPS, 44.01 MiB/s [2024-10-08T16:32:13.359Z] 11278.54 IOPS, 44.06 MiB/s [2024-10-08T16:32:13.359Z] 11302.07 IOPS, 44.15 MiB/s [2024-10-08T16:32:13.359Z] 11315.07 IOPS, 44.20 MiB/s 00:24:20.036 Latency(us) 00:24:20.036 [2024-10-08T16:32:13.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.036 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:20.036 Verification LBA range: start 0x0 length 0x4000 00:24:20.036 NVMe0n1 : 15.01 11315.25 44.20 557.81 0.00 10759.33 454.46 19473.55 00:24:20.036 [2024-10-08T16:32:13.359Z] =================================================================================================================== 00:24:20.036 [2024-10-08T16:32:13.359Z] Total : 11315.25 44.20 557.81 0.00 10759.33 454.46 19473.55 00:24:20.036 Received shutdown signal, test time was about 15.000000 seconds 00:24:20.036 00:24:20.036 Latency(us) 00:24:20.036 [2024-10-08T16:32:13.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.036 [2024-10-08T16:32:13.359Z] =================================================================================================================== 00:24:20.036 [2024-10-08T16:32:13.359Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.036 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:20.036 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:20.036 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:20.036 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=520678 00:24:20.036 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:20.036 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 520678 /var/tmp/bdevperf.sock 00:24:20.036 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 520678 ']' 00:24:20.036 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.036 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:20.036 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:20.036 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:20.036 18:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:20.604 18:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:20.604 18:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:20.604 18:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:20.604 [2024-10-08 18:32:13.899184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:20.863 18:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:20.863 [2024-10-08 18:32:14.091681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:20.863 18:32:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:21.121 NVMe0n1 00:24:21.122 18:32:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:21.696 00:24:21.696 18:32:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:21.955 00:24:21.955 18:32:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:21.955 18:32:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:22.213 18:32:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:22.213 18:32:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:25.502 18:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:25.502 18:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:25.502 18:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=521708 00:24:25.502 18:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:25.502 18:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 521708 00:24:26.878 { 00:24:26.878 "results": [ 00:24:26.878 { 00:24:26.878 "job": "NVMe0n1", 00:24:26.878 "core_mask": "0x1", 00:24:26.878 
"workload": "verify", 00:24:26.878 "status": "finished", 00:24:26.878 "verify_range": { 00:24:26.878 "start": 0, 00:24:26.878 "length": 16384 00:24:26.878 }, 00:24:26.878 "queue_depth": 128, 00:24:26.878 "io_size": 4096, 00:24:26.878 "runtime": 1.005448, 00:24:26.878 "iops": 11462.55201661349, 00:24:26.878 "mibps": 44.775593814896446, 00:24:26.878 "io_failed": 0, 00:24:26.878 "io_timeout": 0, 00:24:26.878 "avg_latency_us": 11127.393386303067, 00:24:26.878 "min_latency_us": 2122.118095238095, 00:24:26.878 "max_latency_us": 9487.11619047619 00:24:26.878 } 00:24:26.878 ], 00:24:26.878 "core_count": 1 00:24:26.878 } 00:24:26.878 18:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:26.878 [2024-10-08 18:32:12.896604] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:24:26.878 [2024-10-08 18:32:12.896660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520678 ] 00:24:26.878 [2024-10-08 18:32:12.964245] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.878 [2024-10-08 18:32:13.033467] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.878 [2024-10-08 18:32:15.445135] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:26.878 [2024-10-08 18:32:15.445179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.878 [2024-10-08 18:32:15.445190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.878 [2024-10-08 18:32:15.445199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.878 [2024-10-08 18:32:15.445206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.879 [2024-10-08 18:32:15.445213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.879 [2024-10-08 18:32:15.445220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.879 [2024-10-08 18:32:15.445227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.879 [2024-10-08 18:32:15.445234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.879 [2024-10-08 18:32:15.445241] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.879 [2024-10-08 18:32:15.445265] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.879 [2024-10-08 18:32:15.445279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a2400 (9): Bad file descriptor 00:24:26.879 [2024-10-08 18:32:15.455706] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:26.879 Running I/O for 1 seconds... 
00:24:26.879 11397.00 IOPS, 44.52 MiB/s 00:24:26.879 Latency(us) 00:24:26.879 [2024-10-08T16:32:20.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.879 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:26.879 Verification LBA range: start 0x0 length 0x4000 00:24:26.879 NVMe0n1 : 1.01 11462.55 44.78 0.00 0.00 11127.39 2122.12 9487.12 00:24:26.879 [2024-10-08T16:32:20.202Z] =================================================================================================================== 00:24:26.879 [2024-10-08T16:32:20.202Z] Total : 11462.55 44.78 0.00 0.00 11127.39 2122.12 9487.12 00:24:26.879 18:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:26.879 18:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:26.879 18:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:27.138 18:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:27.138 18:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:27.138 18:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:27.397 18:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:30.686 18:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:30.686 18:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:30.686 18:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 520678 00:24:30.686 18:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 520678 ']' 00:24:30.686 18:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 520678 00:24:30.686 18:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:30.686 18:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:30.686 18:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 520678 00:24:30.686 18:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:30.686 18:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:30.686 18:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 520678' 00:24:30.686 killing process with pid 520678 00:24:30.686 18:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 520678 00:24:30.686 18:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 520678 00:24:30.945 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # 
sync 00:24:30.945 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:31.204 rmmod nvme_tcp 00:24:31.204 rmmod nvme_fabrics 00:24:31.204 rmmod nvme_keyring 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 517610 ']' 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 517610 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 517610 ']' 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 517610 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 517610 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 517610' 00:24:31.204 killing process with pid 517610 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 517610 00:24:31.204 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 517610 00:24:31.464 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:31.464 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:31.464 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:31.464 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:31.464 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:24:31.464 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:31.464 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@789 -- # iptables-restore 00:24:31.464 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:31.464 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:31.464 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.464 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.464 18:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:33.421 00:24:33.421 real 0m39.490s 00:24:33.421 user 2m6.115s 00:24:33.421 sys 0m8.093s 00:24:33.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:33.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:33.421 ************************************ 00:24:33.421 END TEST nvmf_failover 00:24:33.421 ************************************ 00:24:33.421 18:32:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:33.421 18:32:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:33.421 18:32:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:33.421 18:32:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.681 ************************************ 00:24:33.681 START TEST nvmf_host_discovery 00:24:33.681 ************************************ 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:33.681 * Looking for test storage... 
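For reference, the failover teardown traced above (host/failover.sh@95 through @116) reduces to a short RPC sequence. A condensed sketch using this run's socket paths, ports, and PIDs (520678 was bdevperf, 517610 the target app); these values will differ on another run:

```bash
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Verify the bdevperf controller survived the failovers, then detach the
# two extra paths (ports 4422 and 4421) from NVMe0.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0

# Stop bdevperf, drop the subsystem, then let nvmftestfini unload
# nvme-tcp/nvme-fabrics/nvme-keyring and kill the target app.
kill 520678 && wait 520678
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```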
00:24:33.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:33.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.681 --rc genhtml_branch_coverage=1 00:24:33.681 --rc genhtml_function_coverage=1 00:24:33.681 --rc genhtml_legend=1 00:24:33.681 --rc geninfo_all_blocks=1 00:24:33.681 --rc geninfo_unexecuted_blocks=1 00:24:33.681 00:24:33.681 ' 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:33.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.681 --rc genhtml_branch_coverage=1 00:24:33.681 --rc genhtml_function_coverage=1 00:24:33.681 --rc genhtml_legend=1 00:24:33.681 --rc geninfo_all_blocks=1 00:24:33.681 --rc geninfo_unexecuted_blocks=1 00:24:33.681 00:24:33.681 ' 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:33.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.681 --rc genhtml_branch_coverage=1 00:24:33.681 --rc genhtml_function_coverage=1 00:24:33.681 --rc genhtml_legend=1 00:24:33.681 --rc geninfo_all_blocks=1 00:24:33.681 --rc geninfo_unexecuted_blocks=1 00:24:33.681 00:24:33.681 ' 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:33.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.681 --rc genhtml_branch_coverage=1 00:24:33.681 --rc genhtml_function_coverage=1 00:24:33.681 --rc genhtml_legend=1 00:24:33.681 --rc geninfo_all_blocks=1 00:24:33.681 --rc geninfo_unexecuted_blocks=1 00:24:33.681 00:24:33.681 ' 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:33.681 18:32:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.681 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:33.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:33.682 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:40.255 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:40.255 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:40.255 18:32:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:40.255 Found net devices under 0000:86:00.0: cvl_0_0 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:40.255 Found net devices under 0000:86:00.1: cvl_0_1 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:40.255 
18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:40.255 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:40.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:40.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:24:40.256 00:24:40.256 --- 10.0.0.2 ping statistics --- 00:24:40.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.256 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:40.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:40.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:24:40.256 00:24:40.256 --- 10.0.0.1 ping statistics --- 00:24:40.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.256 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=526130 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 526130 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 526130 ']' 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:40.256 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.256 [2024-10-08 18:32:33.014036] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
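The nvmf_tcp_init trace above is the substantive part of this setup: the first e810 port (cvl_0_0) moves into a fresh network namespace to act as the target, while the second port (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch with this machine's device names (they differ per host):

```bash
# Target NIC goes into its own namespace; initiator NIC stays put.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# 10.0.0.1 = initiator (root ns), 10.0.0.2 = target (inside the ns).
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic in on the default port, then prove reachability
# in both directions before starting any target.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The sub-millisecond round-trip times in the ping output above show the two ports reach each other directly, so the test traffic never leaves the machine.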
00:24:40.256 [2024-10-08 18:32:33.014081] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.256 [2024-10-08 18:32:33.085936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.256 [2024-10-08 18:32:33.156006] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.256 [2024-10-08 18:32:33.156050] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.256 [2024-10-08 18:32:33.156057] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.256 [2024-10-08 18:32:33.156063] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.256 [2024-10-08 18:32:33.156068] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:40.256 [2024-10-08 18:32:33.156660] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.561 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:40.561 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:40.561 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:40.561 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:40.561 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.561 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.561 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:40.561 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.561 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.881 [2024-10-08 18:32:33.884633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.881 [2024-10-08 18:32:33.896823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.881 null0 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.881 null1 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=526273 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 526273 /tmp/host.sock 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 526273 ']' 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:40.881 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:40.881 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.881 [2024-10-08 18:32:33.975999] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
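discovery.sh then runs two SPDK apps: the real target inside the namespace (RPC on the default /var/tmp/spdk.sock) and a second nvmf_tgt on /tmp/host.sock that plays the discovering host. A condensed sketch of the fixture built above, with the script's rpc_cmd wrapper spelled out as direct rpc.py calls:

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Target in the namespace (pid 526130 in this run) and the host-side
# app (pid 526273) on its own RPC socket.
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
$SPDK/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &

# Target side: TCP transport, a discovery listener on port 8009, and two
# 1000-block, 512-byte-sector null bdevs for the subsystems added next.
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$SPDK/scripts/rpc.py bdev_null_create null0 1000 512
$SPDK/scripts/rpc.py bdev_null_create null1 1000 512
$SPDK/scripts/rpc.py bdev_wait_for_examine
```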
00:24:40.881 [2024-10-08 18:32:33.976041] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526273 ] 00:24:40.881 [2024-10-08 18:32:34.042082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.881 [2024-10-08 18:32:34.126393] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:41.818 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.818 [2024-10-08 18:32:35.124063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.818 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.077 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:24:42.078 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:42.645 [2024-10-08 18:32:35.842545] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:42.645 [2024-10-08 18:32:35.842567] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:42.645 [2024-10-08 18:32:35.842579] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:42.645 
[2024-10-08 18:32:35.928839] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:42.904 [2024-10-08 18:32:36.107025] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:42.904 [2024-10-08 18:32:36.107048] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
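The retry pattern visible throughout the xtrace above (the autotest_common.sh@914-920 lines) is a small polling helper: it evals the quoted condition up to 10 times, sleeping 1 s between attempts, and returns success as soon as the condition holds. A sketch reconstructed from the traced lines — the success path (`return 0`) is shown in this log; the exhaustion path is an assumption, since this run never times out:

    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            # Re-evaluate the caller's quoted condition, e.g.
            # '[[ "$(get_subsystem_names)" == "nvme0" ]]'
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        # Assumed failure path: all attempts exhausted without the condition holding.
        return 1
    }
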
00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.163 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:43.423 18:32:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.423 [2024-10-08 18:32:36.608132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:43.423 [2024-10-08 18:32:36.608309] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:43.423 [2024-10-08 18:32:36.608334] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 
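The notification accounting that these waits keep re-evaluating comes from the two traced one-liners at host/discovery.sh@74-75: count the notifications newer than the last consumed id, then advance the id by that count, so each check only sees events generated since the previous one (the log shows notify_id stepping 0 -> 1 -> 2 -> 4 this way). Reconstructed from the trace, with the host RPC socket path as it appears in this run:

    get_notification_count() {
        # Count only events newer than the last consumed notification id.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
        # Advance the cursor so the next check starts after these events.
        notify_id=$((notify_id + notification_count))
    }
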
00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:43.423 18:32:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:43.423 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.423 [2024-10-08 18:32:36.735724] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:43.682 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:43.682 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:43.682 [2024-10-08 18:32:36.960873] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:43.682 [2024-10-08 18:32:36.960891] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:43.682 [2024-10-08 18:32:36.960896] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:44.617 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:44.617 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:44.617 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:44.617 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:44.617 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:44.617 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.617 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:44.617 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.617 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:44.618 18:32:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.618 [2024-10-08 18:32:37.860320] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:44.618 [2024-10-08 18:32:37.860343] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:44.618 [2024-10-08 18:32:37.869308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.618 [2024-10-08 18:32:37.869329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.618 [2024-10-08 18:32:37.869338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.618 [2024-10-08 18:32:37.869345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.618 [2024-10-08 18:32:37.869369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.618 [2024-10-08 18:32:37.869382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.618 [2024-10-08 18:32:37.869390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.618 [2024-10-08 18:32:37.869397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.618 [2024-10-08 18:32:37.869405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3f450 is same with the state(6) to be set 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.618 [2024-10-08 18:32:37.879319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3f450 (9): Bad file descriptor 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.618 [2024-10-08 18:32:37.889358] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:44.618 [2024-10-08 18:32:37.889627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.618 [2024-10-08 18:32:37.889643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3f450 with addr=10.0.0.2, port=4420 00:24:44.618 [2024-10-08 18:32:37.889651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3f450 is same with the state(6) to be set 00:24:44.618 [2024-10-08 18:32:37.889667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3f450 (9): Bad file descriptor 00:24:44.618 [2024-10-08 18:32:37.889684] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:44.618 [2024-10-08 18:32:37.889692] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:44.618 [2024-10-08 18:32:37.889700] 
nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:44.618 [2024-10-08 18:32:37.889710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.618 [2024-10-08 18:32:37.899417] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:44.618 [2024-10-08 18:32:37.899670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.618 [2024-10-08 18:32:37.899683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3f450 with addr=10.0.0.2, port=4420 00:24:44.618 [2024-10-08 18:32:37.899690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3f450 is same with the state(6) to be set 00:24:44.618 [2024-10-08 18:32:37.899701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3f450 (9): Bad file descriptor 00:24:44.618 [2024-10-08 18:32:37.899717] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:44.618 [2024-10-08 18:32:37.899724] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:44.618 [2024-10-08 18:32:37.899731] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:44.618 [2024-10-08 18:32:37.899741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.618 [2024-10-08 18:32:37.909466] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:44.618 [2024-10-08 18:32:37.909713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.618 [2024-10-08 18:32:37.909726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3f450 with addr=10.0.0.2, port=4420 00:24:44.618 [2024-10-08 18:32:37.909733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3f450 is same with the state(6) to be set 00:24:44.618 [2024-10-08 18:32:37.909744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3f450 (9): Bad file descriptor 00:24:44.618 [2024-10-08 18:32:37.909760] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:44.618 [2024-10-08 18:32:37.909766] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:44.618 [2024-10-08 18:32:37.909773] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:44.618 [2024-10-08 18:32:37.909782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
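The connect() failures above (errno = 111, i.e. ECONNREFUSED) are the intended effect of the nvmf_subsystem_remove_listener call traced at discovery.sh@127: with the 4420 listener gone, every host-side reconnect to that port is refused and each controller reset attempt fails, until the next discovery log page (below) reports the 4420 path as not found while 4421 survives. The target-side step can be reproduced standalone; the scripts/rpc.py invocations below are an assumed equivalent of the rpc_cmd wrapper used in this run, with paths relative to an assumed SPDK checkout:

    # Assumed standalone equivalent of the traced rpc_cmd calls:
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
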
00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.618 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:44.618 [2024-10-08 18:32:37.919521] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:44.618 [2024-10-08 18:32:37.919637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.618 [2024-10-08 18:32:37.919650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3f450 with addr=10.0.0.2, port=4420 00:24:44.618 [2024-10-08 18:32:37.919659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3f450 is same with the state(6) to be set 00:24:44.618 [2024-10-08 18:32:37.919671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3f450 (9): Bad file descriptor 00:24:44.618 [2024-10-08 18:32:37.919682] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:44.618 [2024-10-08 18:32:37.919691] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:44.618 [2024-10-08 18:32:37.919700] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:44.618 [2024-10-08 18:32:37.919711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
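The name checks interleaved with the failed resets use the helpers traced at host/discovery.sh@55 and @59: list names over the host RPC socket, sort them, and let xargs flatten the output to one space-separated line so it can be string-compared against expectations like "nvme0n1 nvme0n2". Reconstructed from the traced flags:

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
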
00:24:44.618 [2024-10-08 18:32:37.929575] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:44.618 [2024-10-08 18:32:37.929814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.619 [2024-10-08 18:32:37.929827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3f450 with addr=10.0.0.2, port=4420 00:24:44.619 [2024-10-08 18:32:37.929834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3f450 is same with the state(6) to be set 00:24:44.619 [2024-10-08 18:32:37.929845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3f450 (9): Bad file descriptor 00:24:44.619 [2024-10-08 18:32:37.929855] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:44.619 [2024-10-08 18:32:37.929861] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:44.619 [2024-10-08 18:32:37.929868] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:44.619 [2024-10-08 18:32:37.929877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.878 [2024-10-08 18:32:37.939627] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:44.878 [2024-10-08 18:32:37.939863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.878 [2024-10-08 18:32:37.939875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3f450 with addr=10.0.0.2, port=4420 00:24:44.878 [2024-10-08 18:32:37.939882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3f450 is same with the state(6) to be set 00:24:44.878 [2024-10-08 18:32:37.939891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3f450 (9): Bad file descriptor 00:24:44.878 [2024-10-08 18:32:37.939901] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:44.878 [2024-10-08 18:32:37.939907] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:44.878 [2024-10-08 18:32:37.939917] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:44.878 [2024-10-08 18:32:37.939926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
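Path membership is checked the same way (host/discovery.sh@63): pull the trsvcid of every connected path for the named controller and flatten the numerically sorted list, so the expected output shrinks from "4420 4421" to "4421" once the removed listener's path is pruned below. Reconstructed from the traced flags:

    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
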
00:24:44.878 [2024-10-08 18:32:37.948253] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:44.878 [2024-10-08 18:32:37.948269] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:44.878 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:44.878 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:44.879 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:44.879 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.879 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.879 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:44.879 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.137 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:45.137 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:45.137 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:45.137 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:45.137 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:45.137 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.137 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.074 [2024-10-08 18:32:39.271533] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:46.074 [2024-10-08 18:32:39.271554] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:46.074 [2024-10-08 18:32:39.271567] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:46.074 [2024-10-08 18:32:39.359825] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:46.333 [2024-10-08 18:32:39.589116] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:46.333 [2024-10-08 18:32:39.589145] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:24:46.333 request: 00:24:46.333 { 00:24:46.333 "name": "nvme", 00:24:46.333 "trtype": "tcp", 00:24:46.333 "traddr": "10.0.0.2", 00:24:46.333 "adrfam": "ipv4", 00:24:46.333 "trsvcid": "8009", 00:24:46.333 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:46.333 "wait_for_attach": true, 00:24:46.333 "method": "bdev_nvme_start_discovery", 00:24:46.333 "req_id": 1 00:24:46.333 } 00:24:46.333 Got JSON-RPC error response 00:24:46.333 response: 00:24:46.333 { 00:24:46.333 "code": -17, 00:24:46.333 "message": "File exists" 00:24:46.333 } 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:46.333 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.592 request: 00:24:46.592 { 00:24:46.592 "name": "nvme_second", 00:24:46.592 "trtype": "tcp", 00:24:46.592 "traddr": "10.0.0.2", 00:24:46.592 "adrfam": "ipv4", 00:24:46.592 "trsvcid": "8009", 00:24:46.592 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:46.592 "wait_for_attach": true, 00:24:46.592 "method": "bdev_nvme_start_discovery", 00:24:46.592 "req_id": 1 00:24:46.592 } 00:24:46.592 Got JSON-RPC error response 00:24:46.592 response: 00:24:46.592 { 00:24:46.592 "code": -17, 00:24:46.592 "message": "File exists" 00:24:46.592 } 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:46.592 18:32:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.527 [2024-10-08 18:32:40.828617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.527 [2024-10-08 18:32:40.828659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e70c90 with addr=10.0.0.2, port=8010 00:24:47.527 [2024-10-08 18:32:40.828696] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:47.527 [2024-10-08 18:32:40.828704] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:47.527 [2024-10-08 18:32:40.828711] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:48.904 [2024-10-08 18:32:41.831043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.904 [2024-10-08 18:32:41.831073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e70c90 with addr=10.0.0.2, port=8010 00:24:48.904 [2024-10-08 18:32:41.831089] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:48.904 [2024-10-08 18:32:41.831096] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:48.904 [2024-10-08 18:32:41.831102] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:49.840 [2024-10-08 18:32:42.833199] 
bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:49.840 request: 00:24:49.840 { 00:24:49.840 "name": "nvme_second", 00:24:49.840 "trtype": "tcp", 00:24:49.840 "traddr": "10.0.0.2", 00:24:49.840 "adrfam": "ipv4", 00:24:49.840 "trsvcid": "8010", 00:24:49.840 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:49.840 "wait_for_attach": false, 00:24:49.840 "attach_timeout_ms": 3000, 00:24:49.840 "method": "bdev_nvme_start_discovery", 00:24:49.840 "req_id": 1 00:24:49.840 } 00:24:49.840 Got JSON-RPC error response 00:24:49.840 response: 00:24:49.840 { 00:24:49.840 "code": -110, 00:24:49.840 "message": "Connection timed out" 00:24:49.840 } 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 526273 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:49.840 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:49.840 rmmod nvme_tcp 00:24:49.840 rmmod nvme_fabrics 00:24:49.840 rmmod nvme_keyring 00:24:49.841 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:49.841 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:49.841 18:32:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:49.841 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 526130 ']' 00:24:49.841 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 526130 00:24:49.841 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 526130 ']' 00:24:49.841 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 526130 00:24:49.841 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:24:49.841 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:49.841 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 526130 00:24:49.841 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:49.841 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:49.841 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 526130' 00:24:49.841 killing process with pid 526130 00:24:49.841 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 526130 00:24:49.841 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 526130 00:24:50.100 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:50.100 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:50.100 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:50.100 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:50.100 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:24:50.100 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:50.100 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:24:50.100 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:50.100 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:50.100 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.100 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.100 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.004 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:52.004 00:24:52.004 real 0m18.503s 00:24:52.004 user 0m22.613s 00:24:52.004 sys 0m5.929s 00:24:52.004 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:52.004 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.004 ************************************ 00:24:52.004 END TEST nvmf_host_discovery 00:24:52.004 ************************************ 00:24:52.004 18:32:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:52.004 18:32:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:52.004 18:32:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:52.004 18:32:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.264 ************************************ 00:24:52.264 START TEST nvmf_host_multipath_status 00:24:52.264 ************************************ 00:24:52.264 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:52.264 * Looking for test storage... 00:24:52.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.264 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:52.264 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:24:52.264 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:52.264 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:52.264 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.264 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.264 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.264 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.264 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.264 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.264 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:52.264 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.264 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:52.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.265 --rc genhtml_branch_coverage=1 00:24:52.265 --rc genhtml_function_coverage=1 00:24:52.265 --rc genhtml_legend=1 00:24:52.265 --rc geninfo_all_blocks=1 00:24:52.265 --rc geninfo_unexecuted_blocks=1 00:24:52.265 00:24:52.265 ' 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:52.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.265 --rc genhtml_branch_coverage=1 00:24:52.265 --rc genhtml_function_coverage=1 00:24:52.265 --rc genhtml_legend=1 00:24:52.265 --rc geninfo_all_blocks=1 00:24:52.265 --rc geninfo_unexecuted_blocks=1 00:24:52.265 00:24:52.265 ' 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:52.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.265 --rc genhtml_branch_coverage=1 00:24:52.265 --rc genhtml_function_coverage=1 00:24:52.265 --rc genhtml_legend=1 00:24:52.265 --rc geninfo_all_blocks=1 00:24:52.265 --rc geninfo_unexecuted_blocks=1 00:24:52.265 00:24:52.265 ' 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:52.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.265 --rc genhtml_branch_coverage=1 00:24:52.265 --rc genhtml_function_coverage=1 00:24:52.265 --rc genhtml_legend=1 00:24:52.265 --rc geninfo_all_blocks=1 00:24:52.265 --rc geninfo_unexecuted_blocks=1 00:24:52.265 00:24:52.265 ' 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
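
The xtrace above is scripts/common.sh deciding whether the installed lcov predates 2.x: both version strings are split on dots and dashes (IFS=.-) and compared numerically, component by component. A minimal standalone sketch of that comparison (zero-padding the shorter version and assuming purely numeric components; the name ver_lt is illustrative, not the harness's own):

    # True (exit 0) when dotted version $1 is strictly older than $2.
    ver_lt() {
        local -a a b
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earliest differing component decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "lcov is pre-2.x"   # same outcome as the lt 1.15 2 trace above
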
00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:52.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:52.265 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:52.266 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:52.266 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:52.266 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.266 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:52.266 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:52.266 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:52.266 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.266 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.266 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.266 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:52.266 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:52.266 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:52.266 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.837 18:32:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:58.837 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
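
The scan that follows resolves each matched PCI function (device ID 0x159b, an Intel e810) to the netdev the kernel bound to it, via the device's net/ directory in sysfs, which is what the pci_net_devs glob above expands. As a standalone sketch (the BDF 0000:86:00.0 is simply what this run found):

    # Print the net interface(s) behind one PCI function, as the harness's glob does.
    pci=0000:86:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $dev ]] || continue          # a device with no bound netdev matches nothing
        echo "Found net devices under $pci: ${dev##*/}"
    done
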
00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:58.837 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:58.837 Found net devices under 0000:86:00.0: cvl_0_0 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:58.837 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:24:58.838 Found net devices under 0000:86:00.1: cvl_0_1 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.838 18:32:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:58.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:24:58.838 00:24:58.838 --- 10.0.0.2 ping statistics --- 00:24:58.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.838 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:24:58.838 00:24:58.838 --- 10.0.0.1 ping statistics --- 00:24:58.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.838 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=531407 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 531407 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 531407 ']' 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:58.838 18:32:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:58.838 18:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:58.838 [2024-10-08 18:32:51.533737] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:24:58.838 [2024-10-08 18:32:51.533787] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.838 [2024-10-08 18:32:51.604151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:58.838 [2024-10-08 18:32:51.683105] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.838 [2024-10-08 18:32:51.683142] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.838 [2024-10-08 18:32:51.683150] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.838 [2024-10-08 18:32:51.683156] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.838 [2024-10-08 18:32:51.683161] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.838 [2024-10-08 18:32:51.683968] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.838 [2024-10-08 18:32:51.683971] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.097 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:59.097 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:59.097 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:59.097 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:59.097 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:59.097 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.097 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=531407 00:24:59.097 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:59.356 [2024-10-08 18:32:52.569845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.356 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:59.615 Malloc0 00:24:59.615 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:24:59.874 18:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:00.132 18:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.132 [2024-10-08 18:32:53.379154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.132 18:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:00.390 [2024-10-08 18:32:53.559620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:00.390 18:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:00.390 18:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=531831 00:25:00.390 18:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:00.390 18:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 531831 /var/tmp/bdevperf.sock 00:25:00.390 18:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 531831 ']' 00:25:00.390 18:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:00.390 18:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:00.390 18:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:00.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
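
bdevperf has just been launched with -z, so it sits idle until driven over its private RPC socket (-r /var/tmp/bdevperf.sock); the calls that follow attach the same subsystem through both listeners as a single multipath bdev and only then start I/O. Condensed from those calls (rpc.py and bdevperf.py paths abbreviated), the sequence is roughly:

    sock=/var/tmp/bdevperf.sock
    bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 90 &

    # one controller name, two portals -> a single multipath Nvme0n1
    rpc.py -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    rpc.py -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # kick off the queued verify workload; the ANA checks below run against it
    bdevperf.py -t 120 -s "$sock" perform_tests
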
00:25:00.390 18:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:00.390 18:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:01.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:01.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:01.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:01.585 18:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:01.844 Nvme0n1 00:25:01.844 18:32:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:02.103 Nvme0n1 00:25:02.103 18:32:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:02.103 18:32:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:04.638 18:32:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:04.638 18:32:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:04.638 18:32:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:04.638 18:32:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:05.574 18:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:05.574 18:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:05.574 18:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.574 18:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:05.832 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.832 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:05.832 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.832 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:06.091 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.091 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:06.091 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.091 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:06.350 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.350 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:06.350 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.350 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:06.350 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.350 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:06.350 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.350 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:06.609 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.609 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:06.609 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.609 18:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:06.868 18:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.868 18:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:06.868 18:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
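
This is the first half of the harness's set_ANA_state helper: one nvmf_subsystem_listener_set_ana_state RPC per listener (the matching call for port 4421 follows immediately below), then a one-second sleep so the host can observe the ANA change. Condensed (rpc.py path abbreviated):

    set_ANA_state() {   # $1 -> state for listener 4420, $2 -> state for listener 4421
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    set_ANA_state non_optimized optimized   # the multipath_status.sh@94 step in progress here
    sleep 1
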
00:25:07.127 18:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:07.386 18:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:08.323 18:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:08.323 18:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:08.323 18:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.323 18:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:08.582 18:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.582 18:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:08.582 18:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.582 18:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:08.582 18:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.582 18:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:08.582 18:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.582 18:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:08.841 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.841 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:08.841 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:08.841 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.100 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.100 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:09.100 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
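
Each port_status probe (its jq half appears just below) asks bdevperf for its I/O paths, selects one field of the path whose listener port matches, and compares it to the expected value; a check_status is six such probes. The essence, condensed from the surrounding calls:

    port_status() {   # $1 = trsvcid, $2 = field (current|connected|accessible), $3 = expected
        local got
        got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ $got == "$3" ]]
    }
    port_status 4420 current true    # the optimized leg should be carrying I/O
    port_status 4421 current false   # the non_optimized leg should not
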
00:25:09.100 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:09.360 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.360 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:09.360 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.360 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:09.619 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.619 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:09.619 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:09.619 18:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:09.878 18:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:10.815 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:10.815 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:10.815 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.815 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:11.074 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.074 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:11.074 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.074 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:11.333 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:11.333 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:11.333 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.333 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:11.592 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.592 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:11.592 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.592 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:11.851 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.851 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:11.851 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.851 18:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:12.109 18:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.109 18:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:12.109 18:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.109 18:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:12.109 18:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.109 18:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:12.109 18:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:12.368 18:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:12.626 18:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:13.563 18:33:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:13.563 18:33:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:13.563 18:33:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.563 18:33:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:13.821 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.821 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:13.821 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:13.821 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.079 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:14.079 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:14.079 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.079 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:14.337 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.337 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:14.337 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.337 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:14.596 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.596 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:14.596 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.596 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:14.596 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.596 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:14.596 18:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.596 18:33:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:14.854 18:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:14.854 18:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:14.854 18:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:15.114 18:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:15.376 18:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:16.315 18:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:16.315 18:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:16.315 18:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.315 18:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:16.573 18:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:16.573 18:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:16.573 18:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.573 18:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:16.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:16.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:16.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:16.832 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.832 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:16.832 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.832 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:17.089 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.089 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:17.089 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.089 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:17.348 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:17.348 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:17.348 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:17.348 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.606 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:17.606 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:17.606 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:17.606 18:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:17.865 18:33:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:18.799 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:18.799 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:18.799 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.799 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:19.056 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:19.056 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:19.056 18:33:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:19.056 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.314 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.314 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:19.314 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.314 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:19.573 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.573 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:19.573 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.573 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:19.831 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.831 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:19.831 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.831 18:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:19.831 18:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:19.831 18:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:19.831 18:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:19.831 18:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.090 18:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.090 18:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:20.348 18:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:20.348 18:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:20.606 18:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:20.865 18:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:21.801 18:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:21.801 18:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:21.801 18:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.801 18:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:22.060 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.060 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:22.060 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.060 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:22.060 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.060 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:22.060 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.060 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:22.318 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.318 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:22.318 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.318 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:22.577 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.577 18:33:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:22.577 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.577 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:22.835 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.835 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:22.835 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.835 18:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:23.094 18:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.094 18:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:23.094 18:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:23.094 18:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:23.353 18:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:24.288 18:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:24.288 18:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:24.288 18:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.288 18:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:24.546 18:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:24.546 18:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:24.546 18:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.546 18:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:24.804 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.804 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:24.804 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.804 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:25.063 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.063 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:25.063 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:25.063 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.401 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.401 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:25.401 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.401 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:25.401 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.401 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:25.401 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.401 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:25.694 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.694 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:25.694 18:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:25.953 18:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:25.953 18:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
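The expected current flags change once the trace reaches bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active (multipath_status.sh@116): under active_active, every connected, accessible path in the best available ANA state can carry I/O, so the @121 check (optimized/optimized) and the @131 check that follows (non_optimized/non_optimized) both expect current to be true on the two ports at once. The jq filter behind these probes can be exercised standalone; the JSON below is a hand-written, abbreviated stand-in for real bdev_nvme_get_io_paths output, not captured from this run:

jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' <<'EOF'
{"poll_groups": [{"io_paths": [
  {"bdev_name": "Nvme0n1", "transport": {"trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420"},
   "current": true, "connected": true, "accessible": true},
  {"bdev_name": "Nvme0n1", "transport": {"trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4421"},
   "current": true, "connected": true, "accessible": true}
]}]}
EOF
# Prints "true": select() keeps only the 4421 path and projects its .current field.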
00:25:27.346 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:27.346 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:27.346 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.346 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:27.346 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.346 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:27.346 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:27.346 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.604 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.604 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:27.604 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:27.604 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.604 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.604 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:27.604 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.604 18:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:27.862 18:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.862 18:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:27.862 18:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.862 18:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:28.120 18:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.120 18:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:28.120 18:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.120 18:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:28.378 18:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.378 18:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:28.378 18:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:28.636 18:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:28.636 18:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:30.009 18:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:30.009 18:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:30.009 18:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.009 18:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:30.009 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.009 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:30.009 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.009 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:30.268 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:30.268 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:30.268 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.268 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:30.268 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]]
00:25:30.268 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:30.268 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:30.268 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:30.526 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:30.526 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:30.526 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:30.526 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:30.784 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:30.784 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:25:30.784 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:30.784 18:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:31.043 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:31.043 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 531831
00:25:31.043 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 531831 ']'
00:25:31.043 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 531831
00:25:31.043 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:25:31.043 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:31.043 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 531831
00:25:31.043 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:25:31.043 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:25:31.043 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 531831'
00:25:31.043 killing process with pid 531831
00:25:31.043 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 531831
00:25:31.043 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 531831
00:25:31.043 {
00:25:31.043   "results": [
00:25:31.043     {
00:25:31.043       "job": "Nvme0n1",
00:25:31.043       "core_mask": "0x4",
00:25:31.043       "workload": "verify",
00:25:31.043       "status": "terminated",
00:25:31.043       "verify_range": {
00:25:31.043         "start": 0,
00:25:31.043         "length": 16384
00:25:31.043       },
00:25:31.043       "queue_depth": 128,
00:25:31.043       "io_size": 4096,
00:25:31.043       "runtime": 28.740308,
00:25:31.043       "iops": 10612.238393548183,
00:25:31.043       "mibps": 41.45405622479759,
00:25:31.043       "io_failed": 0,
00:25:31.043       "io_timeout": 0,
00:25:31.043       "avg_latency_us": 12041.67163576961,
00:25:31.043       "min_latency_us": 1513.5695238095238,
00:25:31.043       "max_latency_us": 3083812.083809524
00:25:31.043     }
00:25:31.043   ],
00:25:31.043   "core_count": 1
00:25:31.043 }
00:25:31.327 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 531831
00:25:31.327 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:31.327 [2024-10-08 18:32:53.632507] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:25:31.327 [2024-10-08 18:32:53.632558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531831 ]
00:25:31.327 [2024-10-08 18:32:53.695947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:31.327 [2024-10-08 18:32:53.768182] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:25:31.327 Running I/O for 90 seconds...
00:25:31.327 11361.00 IOPS, 44.38 MiB/s [2024-10-08T16:33:24.650Z]
11438.50 IOPS, 44.68 MiB/s [2024-10-08T16:33:24.650Z]
11415.33 IOPS, 44.59 MiB/s [2024-10-08T16:33:24.650Z]
11499.25 IOPS, 44.92 MiB/s [2024-10-08T16:33:24.650Z]
11493.00 IOPS, 44.89 MiB/s [2024-10-08T16:33:24.650Z]
11468.83 IOPS, 44.80 MiB/s [2024-10-08T16:33:24.650Z]
11442.43 IOPS, 44.70 MiB/s [2024-10-08T16:33:24.650Z]
11423.00 IOPS, 44.62 MiB/s [2024-10-08T16:33:24.650Z]
11426.33 IOPS, 44.63 MiB/s [2024-10-08T16:33:24.650Z]
11436.90 IOPS, 44.68 MiB/s [2024-10-08T16:33:24.650Z]
11414.45 IOPS, 44.59 MiB/s [2024-10-08T16:33:24.650Z]
11425.17 IOPS, 44.63 MiB/s [2024-10-08T16:33:24.650Z]
[2024-10-08 18:33:08.287389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.327 [2024-10-08 18:33:08.287430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:31.327 [2024-10-08 18:33:08.287452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.327 [2024-10-08 18:33:08.287461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:31.327 [2024-10-08 18:33:08.287474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.327 [2024-10-08 18:33:08.287482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:31.327 [2024-10-08 18:33:08.287495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327
[2024-10-08 18:33:08.287502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.287521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.287542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.287562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.287581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.287782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.287807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.287827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.287847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.287866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.287885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.287905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.287924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.287944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.287965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:123008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.287985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.287997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.288004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.288017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.288024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.288036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:123032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.288046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.288058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.288065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.288077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.288084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.288096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.288103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.288115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.288122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:31.327 [2024-10-08 18:33:08.288134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:123072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.327 [2024-10-08 18:33:08.288141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:31.328 [2024-10-08 18:33:08.288153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.328 [2024-10-08 18:33:08.288160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:31.328 [2024-10-08 18:33:08.288172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.328 [2024-10-08 18:33:08.288179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:31.328 [2024-10-08 18:33:08.288192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:123096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.328 [2024-10-08 18:33:08.288199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:31.328 [2024-10-08 18:33:08.288212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.328 [2024-10-08 18:33:08.288218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.328 [2024-10-08 18:33:08.288230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:123112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.328 [2024-10-08 18:33:08.288238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.328 [2024-10-08 18:33:08.288251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.328 [2024-10-08 18:33:08.288257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 
dnr:0
00:25:31.328 [2024-10-08 18:33:08.288270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:123128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.288277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:123136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.288298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.288317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.288336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.288356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.288380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.288400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.288805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.288827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.328 [2024-10-08 18:33:08.288847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.328 [2024-10-08 18:33:08.288866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.328 [2024-10-08 18:33:08.288886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.328 [2024-10-08 18:33:08.288905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.328 [2024-10-08 18:33:08.288928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.328 [2024-10-08 18:33:08.288948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.328 [2024-10-08 18:33:08.288967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.288980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.288987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:123264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:123328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:31.328 [2024-10-08 18:33:08.289327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.328 [2024-10-08 18:33:08.289334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:123472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.289702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.289709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.290112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:123512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.329 [2024-10-08 18:33:08.290134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:25:31.329 [2024-10-08 18:33:08.290528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.329 [2024-10-08 18:33:08.290536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.330 [2024-10-08 18:33:08.290555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.330 [2024-10-08 18:33:08.290575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.330 [2024-10-08 18:33:08.290594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.330 [2024-10-08 18:33:08.290613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.330 [2024-10-08 18:33:08.290634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.330 [2024-10-08 18:33:08.290655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.330 [2024-10-08 18:33:08.290675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.330 [2024-10-08 18:33:08.290694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.330 [2024-10-08 18:33:08.290713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.330 [2024-10-08 18:33:08.290733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.330 [2024-10-08 18:33:08.290757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.330 [2024-10-08 18:33:08.290777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.290796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.290816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.290835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.290854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:123552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.290874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.290893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.290912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.330 [2024-10-08 18:33:08.290932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.290952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.290971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.290983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.290993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:123008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:31.330 [2024-10-08 18:33:08.291720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:123032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.330 [2024-10-08 18:33:08.291727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.291739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.291746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.291759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.291766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.291779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.291786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.291798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:123064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.291805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.291817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:123072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.291824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.291836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.291843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.291857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.291864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.291877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.291884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.291896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.291903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.291916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:123112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.291923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.291935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:123120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.291942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.291955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.291962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.291974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.291981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.291993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.292000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.292012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.292019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.292031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.292038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.292050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.292059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.292071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.292078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.292092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.292099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.292111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.292118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.292130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.331 [2024-10-08 18:33:08.292137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.292149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.331 [2024-10-08 18:33:08.292157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.292168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.331 [2024-10-08 18:33:08.292175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.292188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.331 [2024-10-08 18:33:08.292195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.292207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.331 [2024-10-08 18:33:08.292214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.303112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.331 [2024-10-08 18:33:08.303126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.303143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.331 [2024-10-08 18:33:08.303153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.303170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.303179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.303196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.303205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.303222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.303232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.303248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.331 [2024-10-08 18:33:08.303261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:31.331 [2024-10-08 18:33:08.303277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.303286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.303303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.303314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.303331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.303340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.303931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.303949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.303968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:123264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.303978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.303995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.332 [2024-10-08 18:33:08.304795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.332 [2024-10-08 18:33:08.304823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.332 [2024-10-08 18:33:08.304848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.332 [2024-10-08 18:33:08.304874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.332 [2024-10-08 18:33:08.304900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.332 [2024-10-08 18:33:08.304925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:31.332 [2024-10-08 18:33:08.304942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.333 [2024-10-08 18:33:08.304951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:31.333 [2024-10-08 18:33:08.304967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.333 [2024-10-08 18:33:08.304977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:31.333 [2024-10-08 18:33:08.304993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.333 [2024-10-08 18:33:08.305002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:31.333 [2024-10-08 18:33:08.305019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.333 [2024-10-08 18:33:08.305028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:31.333 [2024-10-08 18:33:08.305045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.333 [2024-10-08 18:33:08.305054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:31.333 [2024-10-08 18:33:08.305071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.333 [2024-10-08 18:33:08.305080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:25:31.333 [2024-10-08 18:33:08.305096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.333 [2024-10-08 18:33:08.305105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:25:31.333 [2024-10-08 18:33:08.305124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122712
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 
m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.333 [2024-10-08 18:33:08.305668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.333 [2024-10-08 18:33:08.305693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:123536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.333 [2024-10-08 18:33:08.305719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.333 [2024-10-08 18:33:08.305745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:123552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.333 [2024-10-08 18:33:08.305774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.333 [2024-10-08 18:33:08.305800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.333 [2024-10-08 18:33:08.305826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.333 [2024-10-08 18:33:08.305852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.333 [2024-10-08 18:33:08.305878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.333 [2024-10-08 18:33:08.305904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.333 [2024-10-08 18:33:08.305930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.333 [2024-10-08 18:33:08.305956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.305973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.333 [2024-10-08 18:33:08.305983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.306000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.333 [2024-10-08 18:33:08.306009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:31.333 [2024-10-08 18:33:08.306026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.333 [2024-10-08 18:33:08.306036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.306870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.306887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.306906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.306916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.306936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.306945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.306962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.306971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.306988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.306998] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:123032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:123056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:123072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:123112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:105 nsid:1 lba:123120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.334 [2024-10-08 18:33:08.307756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307773] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.334 [2024-10-08 18:33:08.307782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.334 [2024-10-08 18:33:08.307808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.334 [2024-10-08 18:33:08.307834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.334 [2024-10-08 18:33:08.307860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.334 [2024-10-08 18:33:08.307890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.334 [2024-10-08 18:33:08.307916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:31.334 [2024-10-08 18:33:08.307933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.334 [2024-10-08 18:33:08.307942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.307964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.307974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.307990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 
cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:123264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308827] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:123320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:123328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.308976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:123344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.308985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.309011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.309037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.309063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 
18:33:08.309091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.309117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.309143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.309170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.309195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.309221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.309247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.309273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.309299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.309325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:123456 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.309352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.309383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:123472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.335 [2024-10-08 18:33:08.309412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:31.335 [2024-10-08 18:33:08.309428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.336 [2024-10-08 18:33:08.309438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.336 [2024-10-08 18:33:08.309464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.336 [2024-10-08 18:33:08.309490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.336 [2024-10-08 18:33:08.309516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.336 [2024-10-08 18:33:08.309541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.336 [2024-10-08 18:33:08.309567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.336 [2024-10-08 18:33:08.309593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309609] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:113 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.336 [2024-10-08 18:33:08.309619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.336 [2024-10-08 18:33:08.309645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.336 [2024-10-08 18:33:08.309671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.336 [2024-10-08 18:33:08.309696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.336 [2024-10-08 18:33:08.309724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.336 [2024-10-08 18:33:08.309752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.336 [2024-10-08 18:33:08.309779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.336 [2024-10-08 18:33:08.309804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.336 [2024-10-08 18:33:08.309830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:31.336 [2024-10-08 18:33:08.309846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.336 [2024-10-08 18:33:08.309856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:31.336 
00:25:31.336-00:25:31.341 [2024-10-08 18:33:08.309872 .. 18:33:08.322576] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated I/O command/completion dump on qid:1 (cid 0-126, sqhd cycling 0x0000-0x007f): READ commands (sqid:1, nsid:1, lba 122552-122864, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1, nsid:1, lba 122872-123568, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with status ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0; several hundred near-identical entries condensed to this placeholder, with the final entry at [2024-10-08 18:33:08.322576] truncated mid-line in the source.
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.322586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.322605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.322614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:123264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 
sqhd:005f p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:123320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:123328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:123344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.341 [2024-10-08 18:33:08.323785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:31.341 [2024-10-08 18:33:08.323805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.323816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.323833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.323844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.323861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.323871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.323889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.323899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.323917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.323927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.323946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.323956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.323974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.323984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.324012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.324040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.324067] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.324096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.324124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:123472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.324154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.324182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.324210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.324238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.324266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.342 [2024-10-08 18:33:08.324294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324627] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.342 
[2024-10-08 18:33:08.324909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.342 [2024-10-08 18:33:08.324937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.342 [2024-10-08 18:33:08.324947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.324965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.343 [2024-10-08 18:33:08.324975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.324995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.343 [2024-10-08 18:33:08.325005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.343 [2024-10-08 18:33:08.325033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.343 [2024-10-08 18:33:08.325061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.343 [2024-10-08 18:33:08.325088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.343 [2024-10-08 18:33:08.325116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.343 [2024-10-08 18:33:08.325144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.343 [2024-10-08 18:33:08.325172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.343 [2024-10-08 18:33:08.325202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.325230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.325258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.325286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.325314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:123552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.325342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.325370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.325405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.343 [2024-10-08 18:33:08.325434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.325463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.325491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.325509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.325519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:114 nsid:1 lba:123032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.326975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.326993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:123056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.343 [2024-10-08 18:33:08.327004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:31.343 [2024-10-08 18:33:08.327022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327196] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:123128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b 
p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.344 [2024-10-08 18:33:08.327523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.344 [2024-10-08 18:33:08.327552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.344 [2024-10-08 18:33:08.327581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.344 [2024-10-08 18:33:08.327609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.344 [2024-10-08 18:33:08.327638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.344 [2024-10-08 18:33:08.327665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.344 [2024-10-08 18:33:08.327693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.327769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.327779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.328384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.328401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.328424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.328434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.328452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.328463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.328481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.328491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.328510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.328520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.328538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.328548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.328566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.328576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.328594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 18:33:08.328604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:31.344 [2024-10-08 18:33:08.328622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.344 [2024-10-08 
18:33:08.328633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:31.345 [2024-10-08 18:33:08.328651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.345 [2024-10-08 18:33:08.328662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:31.345 [2024-10-08 18:33:08.329463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.345 [2024-10-08 18:33:08.329475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0
[... several hundred similar WRITE (SGL DATA BLOCK OFFSET) and READ (SGL TRANSPORT DATA BLOCK) command notices on qid:1, lba:122552-123568, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0, elided ...]
00:25:31.347 [2024-10-08 18:33:08.333166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.347 [2024-10-08 18:33:08.333177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
[... further command/completion notice pairs with the same ASYMMETRIC ACCESS INACCESSIBLE (03/02) status elided ...]
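One consistency check worth noting in the command notices: "len:8" counts logical blocks, while the SGL descriptor's "len:0x1000" is in bytes, and the two agree for the 512-byte logical blocks apparently in use here. A small sketch of that arithmetic; the 512-byte block size is an assumption inferred from the log, not read from the controller:

#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* From a WRITE notice above: lba:123216 len:8, SGL len:0x1000.
     * Assumed 512-byte logical blocks: 8 * 512 == 0x1000 bytes. */
    const uint64_t lba = 123216, nblocks = 8, block_size = 512;

    uint64_t length_bytes = nblocks * block_size;
    uint64_t offset_bytes = lba * block_size;

    assert(length_bytes == 0x1000); /* matches the SGL length printed */
    printf("byte offset 0x%" PRIx64 ", length 0x%" PRIx64 "\n",
           offset_bytes, length_bytes);
    return 0;
}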
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.349 [2024-10-08 18:33:08.334317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.334329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.349 [2024-10-08 18:33:08.334336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.334348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.349 [2024-10-08 18:33:08.334355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.334367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.349 [2024-10-08 18:33:08.334378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.334391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.349 [2024-10-08 18:33:08.334398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.334410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.349 [2024-10-08 18:33:08.334417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.334429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.349 [2024-10-08 18:33:08.334436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.334448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.349 [2024-10-08 18:33:08.334455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.334467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.349 [2024-10-08 18:33:08.334474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.334486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.349 [2024-10-08 18:33:08.334493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 
p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.334505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.349 [2024-10-08 18:33:08.334512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.334524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.349 [2024-10-08 18:33:08.334533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.334545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.349 [2024-10-08 18:33:08.334553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.335104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.349 [2024-10-08 18:33:08.335117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.335131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:123536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.349 [2024-10-08 18:33:08.335138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.335151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.349 [2024-10-08 18:33:08.335159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.335172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.349 [2024-10-08 18:33:08.335179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.335191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.349 [2024-10-08 18:33:08.335198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.335210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:123568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.349 [2024-10-08 18:33:08.335219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.335232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.349 [2024-10-08 18:33:08.335238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.335251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.349 [2024-10-08 18:33:08.335257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.335270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.349 [2024-10-08 18:33:08.335277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.349 [2024-10-08 18:33:08.335289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.349 [2024-10-08 18:33:08.335296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 
18:33:08.335440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:123024 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:123064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:123072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:123112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:123128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.335982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.335994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.336001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:25:31.350 [2024-10-08 18:33:08.336013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.336020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.336034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.350 [2024-10-08 18:33:08.336041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.336053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.350 [2024-10-08 18:33:08.336060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.336072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.350 [2024-10-08 18:33:08.336079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:31.350 [2024-10-08 18:33:08.336091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.350 [2024-10-08 18:33:08.336098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.351 [2024-10-08 18:33:08.336117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.351 [2024-10-08 18:33:08.336137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.351 [2024-10-08 18:33:08.336156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.351 [2024-10-08 18:33:08.336175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:123264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.336985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:123320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.336992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:94 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.351 [2024-10-08 18:33:08.337699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:31.351 [2024-10-08 18:33:08.337712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.352 [2024-10-08 18:33:08.337719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.337731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.352 [2024-10-08 18:33:08.337738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.337750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.352 [2024-10-08 18:33:08.337756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.337769] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.337776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.337790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.337797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.337809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.337817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.337829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.337835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.337848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.337855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.337867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.337876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.337888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.337895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.337907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.337914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.337927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.337933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.337946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.337952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 
cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.337964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.337971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.337984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.337990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338144] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 
[2024-10-08 18:33:08.338338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.352 [2024-10-08 18:33:08.338381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.352 [2024-10-08 18:33:08.338401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:31.352 [2024-10-08 18:33:08.338413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.352 [2024-10-08 18:33:08.338420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:31.353 [2024-10-08 18:33:08.338432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:123536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.353 [2024-10-08 18:33:08.338439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:31.353 [2024-10-08 18:33:08.338452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.353 [2024-10-08 18:33:08.338458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:31.353 [2024-10-08 18:33:08.338470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.353 [2024-10-08 18:33:08.338477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:31.353 [2024-10-08 18:33:08.338489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.353 [2024-10-08 18:33:08.338498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.353 [2024-10-08 18:33:08.338510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.353 [2024-10-08 18:33:08.338517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.353 [2024-10-08 18:33:08.338530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.353 [2024-10-08 18:33:08.338537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:31.353 [2024-10-08 18:33:08.338549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.353 [2024-10-08 18:33:08.338557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:31.353 [2024-10-08 18:33:08.338570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.353 [2024-10-08 18:33:08.338576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
[... hundreds of similar NOTICE pairs elided, 18:33:08.338589 through 18:33:08.344411: WRITE commands for lba:122872-123568 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands for lba:122552-122864 (SGL TRANSPORT DATA BLOCK), all sqid:1 len:8, many LBAs submitted twice under different cids, and every command completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0; sqhd advances through 007f and wraps back to 0000 ...]
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 
18:33:08.344451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:123512 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.358 [2024-10-08 18:33:08.344911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.358 [2024-10-08 18:33:08.344931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.358 [2024-10-08 18:33:08.344953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.358 [2024-10-08 18:33:08.344973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.344985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.358 [2024-10-08 18:33:08.344993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.345006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.358 [2024-10-08 18:33:08.345013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.345025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.358 [2024-10-08 18:33:08.345032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.345044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.358 [2024-10-08 18:33:08.345051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.345065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.358 [2024-10-08 18:33:08.345072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.345085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.358 [2024-10-08 18:33:08.345092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.345104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.358 [2024-10-08 18:33:08.345111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.345123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.358 [2024-10-08 18:33:08.345130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.345142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.358 [2024-10-08 18:33:08.345149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:31.358 [2024-10-08 18:33:08.345162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e 
p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.345569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.345590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.345610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.345630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:123552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.345650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.345671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 
18:33:08.345697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.359 [2024-10-08 18:33:08.345717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.345738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.345759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.345778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.345798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.345817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.345836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.345849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.345856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.346217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.346228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.346241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:122936 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.346250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.346262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.346270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.346286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.346293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:31.359 [2024-10-08 18:33:08.346305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.359 [2024-10-08 18:33:08.346312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:123056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:123064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:123072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 
dnr:0 00:25:31.360 [2024-10-08 18:33:08.346646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:123112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:123120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:123128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.346848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.346855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.347135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.347145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.347159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.347166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.347178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.360 [2024-10-08 18:33:08.347185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.347198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.360 [2024-10-08 18:33:08.347206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.347218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.360 [2024-10-08 18:33:08.347226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.347238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.360 [2024-10-08 18:33:08.347245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.347258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.360 [2024-10-08 18:33:08.347265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.347279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.360 [2024-10-08 18:33:08.347288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.347300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.360 [2024-10-08 18:33:08.347309] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.347323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.347329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.347342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.347350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.347363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.347370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.347387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.347394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:31.360 [2024-10-08 18:33:08.347408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.360 [2024-10-08 18:33:08.347415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:123264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:121 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.347903] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.347910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.348242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.348253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.348266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.348273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.348286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.348293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.348305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.348312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.348324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.348331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.348343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.348352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.348364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.348371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.348389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.348396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.348486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.348495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 
sqhd:007a p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.348508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.361 [2024-10-08 18:33:08.348515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.348528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.361 [2024-10-08 18:33:08.348536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:31.361 [2024-10-08 18:33:08.348548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348690] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 
[2024-10-08 18:33:08.348881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.348991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.348998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.349017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.349036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.349055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.349074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.349093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.349111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.349131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.362 [2024-10-08 18:33:08.349153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.362 [2024-10-08 18:33:08.349172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:123536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.362 [2024-10-08 18:33:08.349191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.362 [2024-10-08 18:33:08.349210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:123552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.362 [2024-10-08 18:33:08.349229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.362 [2024-10-08 18:33:08.349248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:123568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.362 [2024-10-08 18:33:08.349393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.362 [2024-10-08 18:33:08.349425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.362 [2024-10-08 18:33:08.349446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.362 [2024-10-08 18:33:08.349468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.362 [2024-10-08 18:33:08.349483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 
00:25:31.363 [2024-10-08 18:33:08.349614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:123032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.349924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.349998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:123056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:123072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350101] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:123112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:123120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:123136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:123168 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.363 [2024-10-08 18:33:08.350442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.363 [2024-10-08 18:33:08.350467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.363 [2024-10-08 18:33:08.350490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:31.363 [2024-10-08 18:33:08.350507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.363 [2024-10-08 18:33:08.350514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.364 [2024-10-08 18:33:08.350537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.364 [2024-10-08 18:33:08.350561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.364 [2024-10-08 18:33:08.350584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350601] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.364 [2024-10-08 18:33:08.350608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.350632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.350655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.350679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.350703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.350727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.350753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.350776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.350799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.350823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 
18:33:08.350839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.350846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.350870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.350893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.350916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.350933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.350940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:123320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.364 [2024-10-08 18:33:08.351600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.364 [2024-10-08 18:33:08.351607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:08.351625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.365 [2024-10-08 18:33:08.351633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:31.365 11260.85 IOPS, 43.99 MiB/s [2024-10-08T16:33:24.688Z] 10456.50 IOPS, 40.85 MiB/s [2024-10-08T16:33:24.688Z] 9759.40 IOPS, 38.12 MiB/s [2024-10-08T16:33:24.688Z] 9249.38 
IOPS, 36.13 MiB/s [2024-10-08T16:33:24.688Z] 9392.06 IOPS, 36.69 MiB/s [2024-10-08T16:33:24.688Z] 9505.94 IOPS, 37.13 MiB/s [2024-10-08T16:33:24.688Z] 9682.47 IOPS, 37.82 MiB/s [2024-10-08T16:33:24.688Z] 9885.45 IOPS, 38.62 MiB/s [2024-10-08T16:33:24.688Z] 10044.33 IOPS, 39.24 MiB/s [2024-10-08T16:33:24.688Z] 10120.00 IOPS, 39.53 MiB/s [2024-10-08T16:33:24.688Z] 10179.35 IOPS, 39.76 MiB/s [2024-10-08T16:33:24.688Z] 10250.92 IOPS, 40.04 MiB/s [2024-10-08T16:33:24.688Z] 10370.40 IOPS, 40.51 MiB/s [2024-10-08T16:33:24.688Z] 10484.88 IOPS, 40.96 MiB/s [2024-10-08T16:33:24.688Z] [2024-10-08 18:33:21.905130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.365 [2024-10-08 18:33:21.905174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.365 [2024-10-08 18:33:21.905233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.365 [2024-10-08 18:33:21.905261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.365 [2024-10-08 18:33:21.905280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905380] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:86 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:31.365 
[2024-10-08 18:33:21.905759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:124856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:31.365 [2024-10-08 18:33:21.905936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.365 [2024-10-08 18:33:21.905944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:31.365 [2024-10-08 18:33:21.905956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:124560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.365 [2024-10-08 18:33:21.905964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:31.366 [repetitive command/completion pairs condensed: every queued READ (lba 124560-125008) and WRITE (lba 125136-125200) on qid:1 is printed and completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0054 through 0073, between 18:33:21.905956 and 18:33:21.906951, while the active path is down]
00:25:31.366 [2024-10-08 18:33:21.906944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.366 [2024-10-08 18:33:21.906951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:31.366 10551.30 IOPS, 41.22 MiB/s [2024-10-08T16:33:24.689Z]
00:25:31.366 10584.82 IOPS, 41.35 MiB/s [2024-10-08T16:33:24.689Z]
00:25:31.366 Received shutdown signal, test time was about 28.740951 seconds
00:25:31.366
00:25:31.366 Latency(us)
00:25:31.366 [2024-10-08T16:33:24.689Z] Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min      max
00:25:31.366 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:31.366 Verification LBA range: start 0x0 length 0x4000
00:25:31.366 Nvme0n1 : 28.74  10612.24  41.45  0.00  0.00  12041.67  1513.57  3083812.08
00:25:31.366 [2024-10-08T16:33:24.689Z] ===================================================================================================================
00:25:31.366 [2024-10-08T16:33:24.689Z] Total : 10612.24  41.45  0.00  0.00  12041.67  1513.57  3083812.08
00:25:31.366 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
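Stripped of the xtrace plumbing, the teardown being traced here is short; a minimal re-runnable sketch, assuming the checkout path used by this job and that the target still answers on its default RPC socket:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem first
sync                        # flush outstanding writes before touching kernel modules
modprobe -v -r nvme-tcp     # the rmmod lines that follow are this unload cascading through nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics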
00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:31.625 rmmod nvme_tcp 00:25:31.625 rmmod nvme_fabrics 00:25:31.625 rmmod nvme_keyring 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 531407 ']' 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 531407 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 531407 ']' 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 531407 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 531407 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 531407' 00:25:31.625 killing process with pid 531407 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 531407 00:25:31.625 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 531407 00:25:31.884 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:31.884 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:31.884 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:31.884 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:31.884 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:25:31.884 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:31.884 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:25:31.884 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:31.884 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:31.884 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.884 18:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.884 18:33:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.787 18:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:33.787 00:25:33.787 real 0m41.703s 00:25:33.787 user 1m52.929s 00:25:33.787 sys 0m11.490s 00:25:33.787 18:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:33.787 18:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:33.787 ************************************ 00:25:33.787 END TEST nvmf_host_multipath_status 00:25:33.787 ************************************ 00:25:33.787 18:33:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:33.787 18:33:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:33.787 18:33:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:33.787 18:33:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.047 ************************************ 00:25:34.047 START TEST nvmf_discovery_remove_ifc 00:25:34.047 ************************************ 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:34.047 * Looking for test storage... 00:25:34.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:34.047 18:33:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:34.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.047 --rc genhtml_branch_coverage=1 00:25:34.047 --rc genhtml_function_coverage=1 00:25:34.047 --rc genhtml_legend=1 00:25:34.047 --rc geninfo_all_blocks=1 00:25:34.047 --rc geninfo_unexecuted_blocks=1 00:25:34.047 00:25:34.047 ' 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:34.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.047 --rc genhtml_branch_coverage=1 00:25:34.047 --rc genhtml_function_coverage=1 00:25:34.047 --rc genhtml_legend=1 00:25:34.047 --rc geninfo_all_blocks=1 00:25:34.047 --rc geninfo_unexecuted_blocks=1 00:25:34.047 00:25:34.047 ' 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:34.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.047 --rc genhtml_branch_coverage=1 00:25:34.047 --rc genhtml_function_coverage=1 00:25:34.047 --rc genhtml_legend=1 00:25:34.047 --rc geninfo_all_blocks=1 00:25:34.047 --rc geninfo_unexecuted_blocks=1 00:25:34.047 00:25:34.047 ' 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:34.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.047 --rc genhtml_branch_coverage=1 00:25:34.047 --rc genhtml_function_coverage=1 00:25:34.047 --rc genhtml_legend=1 
00:25:34.047 --rc geninfo_all_blocks=1 00:25:34.047 --rc geninfo_unexecuted_blocks=1 00:25:34.047 00:25:34.047 ' 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain prefixes repeated seven times, condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:34.047 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[previous PATH, condensed]
00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[previous PATH, condensed]
00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo [exported PATH, condensed]
00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
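The '[' '' -eq 1 ']' just traced is what produces the "integer expression expected" complaint on the next line: test's -eq needs a number on both sides, and the flag common.sh reads at line 33 is unset here, so it expands to an empty string. A defensive pattern (flag name hypothetical, not necessarily the variable common.sh actually checks):

if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then   # ${var:-0} substitutes a numeric default when the flag is unset
    echo "flag enabled"                     # hypothetical branch body
fi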
00:25:34.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:34.048 18:33:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:40.614 18:33:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:40.614 18:33:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.614 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:40.615 Found 
0000:86:00.0 (0x8086 - 0x159b) 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:40.615 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:40.615 Found net devices under 0000:86:00.0: cvl_0_0 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
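The "Found net devices under ..." lines here (this one for port .0, the matching one for .1 just below) come from globbing /sys/bus/pci/devices/<pci>/net, which is also the quickest way to redo the PCI-to-netdev lookup by hand:

pci=0000:86:00.0
ls "/sys/bus/pci/devices/$pci/net/"   # prints the bound interface name, cvl_0_0 on this rig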
00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:40.615 Found net devices under 0000:86:00.1: cvl_0_1 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:40.615 18:33:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:40.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:25:40.615 00:25:40.615 --- 10.0.0.2 ping statistics --- 00:25:40.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.615 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:40.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:25:40.615 00:25:40.615 --- 10.0.0.1 ping statistics --- 00:25:40.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.615 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=541109 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 541109 
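All of the namespace wiring traced above collapses to a handful of iproute2 calls; restated as a plain sketch of exactly the commands the log shows:

ip netns add cvl_0_0_ns_spdk                         # target gets its own network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move one e810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the initiator side
ping -c 1 10.0.0.2                                   # root ns to target ns, 0.448 ms above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back, 0.207 ms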
00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 541109 ']' 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:40.615 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:40.615 [2024-10-08 18:33:33.368085] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:25:40.615 [2024-10-08 18:33:33.368132] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.615 [2024-10-08 18:33:33.423804] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.615 [2024-10-08 18:33:33.499467] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.615 [2024-10-08 18:33:33.499507] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.616 [2024-10-08 18:33:33.499514] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.616 [2024-10-08 18:33:33.499523] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.616 [2024-10-08 18:33:33.499529] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
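waitforlisten itself only echoes its banner; functionally it polls until the pid is alive and the RPC socket answers. A rough inline stand-in (loop shape assumed, not the helper's real body):

while kill -0 541109 2>/dev/null; do   # stop polling if the target process dies
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1 && break
    sleep 0.5
done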
00:25:40.616 [2024-10-08 18:33:33.500108] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:40.616 [2024-10-08 18:33:33.651140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.616 [2024-10-08 18:33:33.659324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:40.616 null0 00:25:40.616 [2024-10-08 18:33:33.691293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=541133 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 541133 /tmp/host.sock 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 541133 ']' 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:40.616 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:40.616 18:33:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:40.616 [2024-10-08 18:33:33.762013] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
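The rpc_cmd at discovery_remove_ifc.sh@43 is not expanded in the trace, so only its effects show: the 8009 and 4420 listen notices and the null0 bdev above. A plausible reconstruction of that target-side setup, hedged because the exact sizes and flags are not in the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o            # matches NVMF_TRANSPORT_OPTS='-t tcp -o' set earlier
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009 -f ipv4
$rpc bdev_null_create null0 1000 512            # size and block size assumed; only the name 'null0' is logged
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4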
00:25:40.616 [2024-10-08 18:33:33.762055] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid541133 ] 00:25:40.616 [2024-10-08 18:33:33.827830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.616 [2024-10-08 18:33:33.906430] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.549 18:33:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:41.549 18:33:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:41.549 18:33:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:41.549 18:33:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:41.549 18:33:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.549 18:33:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.549 18:33:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.549 18:33:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:41.549 18:33:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.549 18:33:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.549 18:33:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.549 18:33:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:41.549 18:33:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.549 18:33:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.485 [2024-10-08 18:33:35.729564] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:42.485 [2024-10-08 18:33:35.729583] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:42.485 [2024-10-08 18:33:35.729601] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:42.743 [2024-10-08 18:33:35.815862] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:42.743 [2024-10-08 18:33:35.872241] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:42.743 [2024-10-08 18:33:35.872285] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:42.743 [2024-10-08 18:33:35.872304] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:42.743 [2024-10-08 18:33:35.872316] bdev_nvme.c:6972:discovery_attach_controller_done: 
*INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:42.743 [2024-10-08 18:33:35.872334] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:42.743 18:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.743 18:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:42.743 18:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:42.743 [2024-10-08 18:33:35.878135] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf60a50 was disconnected and freed. delete nvme_qpair. 00:25:42.743 18:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.743 18:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:42.743 18:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:42.743 18:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.743 18:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:42.743 18:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.743 18:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.743 18:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:42.743 18:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:42.743 18:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:42.743 18:33:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:42.744 18:33:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:42.744 18:33:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.744 18:33:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:42.744 18:33:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.744 18:33:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:42.744 18:33:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.744 18:33:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:42.744 18:33:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.744 18:33:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:42.744 18:33:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:44.120 18:33:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:44.120 
18:33:37 [three identical get_bdev_list polling iterations condensed (18:33:37, 18:33:38, 18:33:39): each run of rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs still prints nvme0n1, so [[ nvme0n1 != '' ]] holds and the loop sleeps 1 and retries]
00:25:46.926 18:33:40
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:46.926 18:33:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.926 18:33:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:46.926 18:33:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.926 18:33:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:46.926 18:33:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:46.926 18:33:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:46.926 18:33:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.184 18:33:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:47.184 18:33:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:48.120 18:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:48.120 18:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:48.120 18:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:48.120 18:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.120 18:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:48.120 18:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.120 18:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:48.120 18:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.120 18:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:48.120 18:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:48.120 [2024-10-08 18:33:41.314028] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:48.120 [2024-10-08 18:33:41.314070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.120 [2024-10-08 18:33:41.314081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.120 [2024-10-08 18:33:41.314090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.120 [2024-10-08 18:33:41.314097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.120 [2024-10-08 18:33:41.314104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.120 [2024-10-08 18:33:41.314112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.120 [2024-10-08 18:33:41.314120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.120 [2024-10-08 18:33:41.314131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.120 [2024-10-08 18:33:41.314140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.120 [2024-10-08 18:33:41.314146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.120 [2024-10-08 18:33:41.314152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3d2e0 is same with the state(6) to be set 00:25:48.121 [2024-10-08 18:33:41.324050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf3d2e0 (9): Bad file descriptor 00:25:48.121 [2024-10-08 18:33:41.334088] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:49.056 18:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:49.056 18:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.056 18:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:49.056 18:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.056 18:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:49.056 18:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.056 18:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:49.056 [2024-10-08 18:33:42.368413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:49.056 [2024-10-08 18:33:42.368490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf3d2e0 with addr=10.0.0.2, port=4420 00:25:49.056 [2024-10-08 18:33:42.368522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3d2e0 is same with the state(6) to be set 00:25:49.056 [2024-10-08 18:33:42.368571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf3d2e0 (9): Bad file descriptor 00:25:49.056 [2024-10-08 18:33:42.369524] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:49.056 [2024-10-08 18:33:42.369585] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:49.056 [2024-10-08 18:33:42.369607] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:49.056 [2024-10-08 18:33:42.369631] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:49.056 [2024-10-08 18:33:42.369690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
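The burst above is the host-side reconnect path: connect() to 10.0.0.2:4420 times out with errno 110 because the target-side interface was removed, the pending failover is rejected, controller reinitialization fails, and bdev_nvme schedules another reset. As an illustrative aside only, not part of this test script: while such a loop spins, the controller state can be probed over the same RPC socket the test uses; bdev_nvme_get_controllers is a standard SPDK RPC, and the jq filter here is just a sketch.

    # Illustrative probe (assumed usage, not from this trace): list the
    # NVMe bdev controllers known to the host app and their names.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
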
00:25:49.056 [2024-10-08 18:33:42.369716] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:49.314 18:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.314 18:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:49.314 18:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:50.249 [2024-10-08 18:33:43.372208] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:50.249 [2024-10-08 18:33:43.372229] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:50.249 [2024-10-08 18:33:43.372236] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:50.249 [2024-10-08 18:33:43.372242] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:50.249 [2024-10-08 18:33:43.372253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.249 [2024-10-08 18:33:43.372271] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:50.249 [2024-10-08 18:33:43.372294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.249 [2024-10-08 18:33:43.372303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.249 [2024-10-08 18:33:43.372312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.249 [2024-10-08 18:33:43.372318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.249 [2024-10-08 18:33:43.372325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.249 [2024-10-08 18:33:43.372331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.249 [2024-10-08 18:33:43.372338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.249 [2024-10-08 18:33:43.372344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.249 [2024-10-08 18:33:43.372351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.249 [2024-10-08 18:33:43.372357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.249 [2024-10-08 18:33:43.372363] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:25:50.249 [2024-10-08 18:33:43.372839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2c9c0 (9): Bad file descriptor 00:25:50.249 [2024-10-08 18:33:43.373851] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:50.249 [2024-10-08 18:33:43.373861] nvme_ctrlr.c:1233:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:50.249 18:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:51.624 18:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:51.624 18:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.624 18:33:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:51.624 18:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.624 18:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:51.624 18:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.624 18:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:51.624 18:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.624 18:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:51.624 18:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:52.191 [2024-10-08 18:33:45.424524] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:52.191 [2024-10-08 18:33:45.424541] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:52.191 [2024-10-08 18:33:45.424555] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:52.191 [2024-10-08 18:33:45.512819] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:52.450 18:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:52.450 18:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.450 18:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:52.450 18:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.450 18:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:52.450 18:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:52.450 18:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:52.450 18:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.450 18:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:52.450 18:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:52.450 [2024-10-08 18:33:45.695501] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:52.450 [2024-10-08 18:33:45.695536] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:52.450 [2024-10-08 18:33:45.695552] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:52.450 [2024-10-08 18:33:45.695565] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:52.450 [2024-10-08 18:33:45.695572] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:52.450 [2024-10-08 18:33:45.703118] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf389f0 was disconnected and freed. 
delete nvme_qpair. 00:25:53.385 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:53.385 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.385 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:53.385 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.385 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:53.385 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.385 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:53.385 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 541133 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 541133 ']' 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 541133 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 541133 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 541133' 00:25:53.643 killing process with pid 541133 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 541133 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 541133 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:53.643 18:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:53.643 rmmod nvme_tcp 00:25:53.902 rmmod nvme_fabrics 00:25:53.902 rmmod nvme_keyring 
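For reference, the get_bdev_list / wait_for_bdev pair exercised throughout the trace above reduces to a small polling helper. A minimal sketch reconstructed from the xtrace output (the real helpers live in host/discovery_remove_ifc.sh and may differ in detail):

    get_bdev_list() {
        # Bdev names over the host app's RPC socket, normalized to one
        # sorted, space-separated line (the jq | sort | xargs pipeline
        # visible in the trace).
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list matches the expectation:
        # wait_for_bdev nvme1n1 after the interface is re-added, or an empty
        # argument to wait for the bdev to disappear (the [[ nvme0n1 != '' ]]
        # checks above).
        local bdev_check=$1
        while [[ "$(get_bdev_list)" != "$bdev_check" ]]; do
            sleep 1
        done
    }
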
00:25:53.902 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:53.902 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:53.902 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:53.902 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 541109 ']' 00:25:53.902 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 541109 00:25:53.902 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 541109 ']' 00:25:53.902 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 541109 00:25:53.902 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:53.902 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:53.902 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 541109 00:25:53.902 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:53.902 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:53.902 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 541109' 00:25:53.902 killing process with pid 541109 00:25:53.902 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 541109 00:25:53.902 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 541109 00:25:54.161 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:54.161 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:54.161 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:54.161 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:54.161 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:25:54.161 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:54.161 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:25:54.161 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:54.161 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:54.161 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.161 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.161 18:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.064 18:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:56.064 00:25:56.064 real 0m22.194s 00:25:56.064 user 0m28.120s 00:25:56.064 sys 0m6.006s 00:25:56.064 18:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:25:56.064 18:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:56.064 ************************************ 00:25:56.064 END TEST nvmf_discovery_remove_ifc 00:25:56.064 ************************************ 00:25:56.064 18:33:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:56.064 18:33:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:56.064 18:33:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:56.064 18:33:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.323 ************************************ 00:25:56.323 START TEST nvmf_identify_kernel_target 00:25:56.323 ************************************ 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:56.323 * Looking for test storage... 00:25:56.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:56.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.323 --rc genhtml_branch_coverage=1 00:25:56.323 --rc genhtml_function_coverage=1 00:25:56.323 --rc genhtml_legend=1 00:25:56.323 --rc geninfo_all_blocks=1 00:25:56.323 --rc geninfo_unexecuted_blocks=1 00:25:56.323 00:25:56.323 ' 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:56.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.323 --rc genhtml_branch_coverage=1 00:25:56.323 --rc genhtml_function_coverage=1 00:25:56.323 --rc genhtml_legend=1 00:25:56.323 --rc geninfo_all_blocks=1 00:25:56.323 --rc geninfo_unexecuted_blocks=1 00:25:56.323 00:25:56.323 ' 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:56.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.323 --rc genhtml_branch_coverage=1 00:25:56.323 --rc genhtml_function_coverage=1 00:25:56.323 --rc genhtml_legend=1 00:25:56.323 --rc geninfo_all_blocks=1 00:25:56.323 --rc geninfo_unexecuted_blocks=1 00:25:56.323 00:25:56.323 ' 00:25:56.323 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:56.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.323 --rc genhtml_branch_coverage=1 00:25:56.323 --rc genhtml_function_coverage=1 00:25:56.323 --rc genhtml_legend=1 00:25:56.324 --rc geninfo_all_blocks=1 00:25:56.324 --rc geninfo_unexecuted_blocks=1 00:25:56.324 00:25:56.324 ' 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:25:56.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:56.324 18:33:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:02.889 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:02.889 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:02.889 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:02.889 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:02.889 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:02.889 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:02.889 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:02.889 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:02.889 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:02.889 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:02.889 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:02.889 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:02.889 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:02.889 18:33:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:02.889 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:02.890 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:02.890 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:02.890 Found net devices under 0000:86:00.0: cvl_0_0 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:02.890 Found net devices under 0000:86:00.1: cvl_0_1 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:02.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:26:02.890 00:26:02.890 --- 10.0.0.2 ping statistics --- 00:26:02.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.890 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:02.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:02.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:26:02.890 00:26:02.890 --- 10.0.0.1 ping statistics --- 00:26:02.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.890 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:02.890 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:02.891 18:33:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:02.891 18:33:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:05.422 Waiting for block devices as requested 00:26:05.422 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:05.423 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:05.423 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:05.423 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:05.423 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:05.680 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:05.680 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:05.680 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:05.939 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:05.939 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:05.939 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:06.197 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:06.197 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:06.197 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:06.197 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:06.456 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:06.456 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:06.456 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:06.456 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:06.456 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:06.456 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:06.456 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:06.456 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
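The mkdir/echo sequence in the trace below builds a kernel NVMe-oF target through the nvmet configfs tree. A hedged reconstruction of what those writes amount to; the attribute file each bare echo lands in is inferred from the standard nvmet configfs layout rather than shown verbatim in the xtrace:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    mkdir "$subsys"                      # create the subsystem
    mkdir "$subsys/namespaces/1"         # one namespace under it
    mkdir "$nvmet/ports/1"               # one listening port
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # inferred target file
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # backing block device
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # expose the subsystem on the port
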
00:26:06.456 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:06.456 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:06.457 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:06.715 No valid GPT data, bailing 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:06.715 00:26:06.715 Discovery Log Number of Records 2, Generation counter 2 00:26:06.715 =====Discovery Log Entry 0====== 00:26:06.715 trtype: tcp 00:26:06.715 adrfam: ipv4 00:26:06.715 subtype: current discovery subsystem 00:26:06.715 treq: not specified, sq flow control disable supported 00:26:06.715 portid: 1 00:26:06.715 trsvcid: 4420 00:26:06.715 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:06.715 traddr: 10.0.0.1 00:26:06.715 eflags: none 00:26:06.715 sectype: none 00:26:06.715 =====Discovery Log Entry 1====== 00:26:06.715 trtype: tcp 00:26:06.715 adrfam: ipv4 00:26:06.715 subtype: nvme subsystem 00:26:06.715 treq: not specified, sq flow control disable 
supported 00:26:06.715 portid: 1 00:26:06.715 trsvcid: 4420 00:26:06.715 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:06.715 traddr: 10.0.0.1 00:26:06.715 eflags: none 00:26:06.715 sectype: none 00:26:06.715 18:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:06.715 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:06.715 ===================================================== 00:26:06.715 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:06.715 ===================================================== 00:26:06.715 Controller Capabilities/Features 00:26:06.715 ================================ 00:26:06.715 Vendor ID: 0000 00:26:06.715 Subsystem Vendor ID: 0000 00:26:06.715 Serial Number: 3859f8cba69856056b20 00:26:06.715 Model Number: Linux 00:26:06.715 Firmware Version: 6.8.9-20 00:26:06.715 Recommended Arb Burst: 0 00:26:06.715 IEEE OUI Identifier: 00 00 00 00:26:06.715 Multi-path I/O 00:26:06.715 May have multiple subsystem ports: No 00:26:06.715 May have multiple controllers: No 00:26:06.715 Associated with SR-IOV VF: No 00:26:06.715 Max Data Transfer Size: Unlimited 00:26:06.715 Max Number of Namespaces: 0 00:26:06.715 Max Number of I/O Queues: 1024 00:26:06.715 NVMe Specification Version (VS): 1.3 00:26:06.715 NVMe Specification Version (Identify): 1.3 00:26:06.715 Maximum Queue Entries: 1024 00:26:06.715 Contiguous Queues Required: No 00:26:06.715 Arbitration Mechanisms Supported 00:26:06.715 Weighted Round Robin: Not Supported 00:26:06.715 Vendor Specific: Not Supported 00:26:06.715 Reset Timeout: 7500 ms 00:26:06.715 Doorbell Stride: 4 bytes 00:26:06.715 NVM Subsystem Reset: Not Supported 00:26:06.715 Command Sets Supported 00:26:06.715 NVM Command Set: Supported 00:26:06.715 Boot Partition: Not Supported 00:26:06.715 Memory Page Size Minimum: 4096 bytes 00:26:06.715 Memory Page Size Maximum: 4096 bytes 00:26:06.715 Persistent Memory Region: Not Supported 00:26:06.715 Optional Asynchronous Events Supported 00:26:06.715 Namespace Attribute Notices: Not Supported 00:26:06.715 Firmware Activation Notices: Not Supported 00:26:06.715 ANA Change Notices: Not Supported 00:26:06.715 PLE Aggregate Log Change Notices: Not Supported 00:26:06.715 LBA Status Info Alert Notices: Not Supported 00:26:06.715 EGE Aggregate Log Change Notices: Not Supported 00:26:06.715 Normal NVM Subsystem Shutdown event: Not Supported 00:26:06.715 Zone Descriptor Change Notices: Not Supported 00:26:06.715 Discovery Log Change Notices: Supported 00:26:06.715 Controller Attributes 00:26:06.715 128-bit Host Identifier: Not Supported 00:26:06.715 Non-Operational Permissive Mode: Not Supported 00:26:06.715 NVM Sets: Not Supported 00:26:06.715 Read Recovery Levels: Not Supported 00:26:06.715 Endurance Groups: Not Supported 00:26:06.715 Predictable Latency Mode: Not Supported 00:26:06.715 Traffic Based Keep ALive: Not Supported 00:26:06.715 Namespace Granularity: Not Supported 00:26:06.715 SQ Associations: Not Supported 00:26:06.715 UUID List: Not Supported 00:26:06.715 Multi-Domain Subsystem: Not Supported 00:26:06.715 Fixed Capacity Management: Not Supported 00:26:06.715 Variable Capacity Management: Not Supported 00:26:06.715 Delete Endurance Group: Not Supported 00:26:06.715 Delete NVM Set: Not Supported 00:26:06.715 Extended LBA Formats Supported: Not Supported 00:26:06.715 Flexible Data Placement 
Supported: Not Supported 00:26:06.715 00:26:06.715 Controller Memory Buffer Support 00:26:06.715 ================================ 00:26:06.715 Supported: No 00:26:06.715 00:26:06.715 Persistent Memory Region Support 00:26:06.715 ================================ 00:26:06.715 Supported: No 00:26:06.715 00:26:06.715 Admin Command Set Attributes 00:26:06.715 ============================ 00:26:06.715 Security Send/Receive: Not Supported 00:26:06.715 Format NVM: Not Supported 00:26:06.715 Firmware Activate/Download: Not Supported 00:26:06.715 Namespace Management: Not Supported 00:26:06.715 Device Self-Test: Not Supported 00:26:06.715 Directives: Not Supported 00:26:06.715 NVMe-MI: Not Supported 00:26:06.715 Virtualization Management: Not Supported 00:26:06.715 Doorbell Buffer Config: Not Supported 00:26:06.715 Get LBA Status Capability: Not Supported 00:26:06.715 Command & Feature Lockdown Capability: Not Supported 00:26:06.715 Abort Command Limit: 1 00:26:06.715 Async Event Request Limit: 1 00:26:06.715 Number of Firmware Slots: N/A 00:26:06.715 Firmware Slot 1 Read-Only: N/A 00:26:06.715 Firmware Activation Without Reset: N/A 00:26:06.715 Multiple Update Detection Support: N/A 00:26:06.715 Firmware Update Granularity: No Information Provided 00:26:06.715 Per-Namespace SMART Log: No 00:26:06.715 Asymmetric Namespace Access Log Page: Not Supported 00:26:06.715 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:06.715 Command Effects Log Page: Not Supported 00:26:06.715 Get Log Page Extended Data: Supported 00:26:06.715 Telemetry Log Pages: Not Supported 00:26:06.715 Persistent Event Log Pages: Not Supported 00:26:06.715 Supported Log Pages Log Page: May Support 00:26:06.715 Commands Supported & Effects Log Page: Not Supported 00:26:06.715 Feature Identifiers & Effects Log Page:May Support 00:26:06.715 NVMe-MI Commands & Effects Log Page: May Support 00:26:06.715 Data Area 4 for Telemetry Log: Not Supported 00:26:06.715 Error Log Page Entries Supported: 1 00:26:06.715 Keep Alive: Not Supported 00:26:06.715 00:26:06.715 NVM Command Set Attributes 00:26:06.715 ========================== 00:26:06.715 Submission Queue Entry Size 00:26:06.715 Max: 1 00:26:06.715 Min: 1 00:26:06.715 Completion Queue Entry Size 00:26:06.715 Max: 1 00:26:06.715 Min: 1 00:26:06.715 Number of Namespaces: 0 00:26:06.715 Compare Command: Not Supported 00:26:06.715 Write Uncorrectable Command: Not Supported 00:26:06.715 Dataset Management Command: Not Supported 00:26:06.715 Write Zeroes Command: Not Supported 00:26:06.715 Set Features Save Field: Not Supported 00:26:06.715 Reservations: Not Supported 00:26:06.715 Timestamp: Not Supported 00:26:06.715 Copy: Not Supported 00:26:06.715 Volatile Write Cache: Not Present 00:26:06.715 Atomic Write Unit (Normal): 1 00:26:06.715 Atomic Write Unit (PFail): 1 00:26:06.715 Atomic Compare & Write Unit: 1 00:26:06.715 Fused Compare & Write: Not Supported 00:26:06.715 Scatter-Gather List 00:26:06.715 SGL Command Set: Supported 00:26:06.715 SGL Keyed: Not Supported 00:26:06.715 SGL Bit Bucket Descriptor: Not Supported 00:26:06.715 SGL Metadata Pointer: Not Supported 00:26:06.715 Oversized SGL: Not Supported 00:26:06.715 SGL Metadata Address: Not Supported 00:26:06.715 SGL Offset: Supported 00:26:06.715 Transport SGL Data Block: Not Supported 00:26:06.715 Replay Protected Memory Block: Not Supported 00:26:06.715 00:26:06.715 Firmware Slot Information 00:26:06.715 ========================= 00:26:06.715 Active slot: 0 00:26:06.715 00:26:06.715 00:26:06.715 Error Log 00:26:06.715 
========= 00:26:06.715 00:26:06.715 Active Namespaces 00:26:06.715 ================= 00:26:06.715 Discovery Log Page 00:26:06.715 ================== 00:26:06.715 Generation Counter: 2 00:26:06.715 Number of Records: 2 00:26:06.715 Record Format: 0 00:26:06.715 00:26:06.715 Discovery Log Entry 0 00:26:06.715 ---------------------- 00:26:06.715 Transport Type: 3 (TCP) 00:26:06.715 Address Family: 1 (IPv4) 00:26:06.715 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:06.715 Entry Flags: 00:26:06.715 Duplicate Returned Information: 0 00:26:06.715 Explicit Persistent Connection Support for Discovery: 0 00:26:06.715 Transport Requirements: 00:26:06.715 Secure Channel: Not Specified 00:26:06.715 Port ID: 1 (0x0001) 00:26:06.715 Controller ID: 65535 (0xffff) 00:26:06.715 Admin Max SQ Size: 32 00:26:06.715 Transport Service Identifier: 4420 00:26:06.715 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:06.715 Transport Address: 10.0.0.1 00:26:06.715 Discovery Log Entry 1 00:26:06.715 ---------------------- 00:26:06.715 Transport Type: 3 (TCP) 00:26:06.715 Address Family: 1 (IPv4) 00:26:06.715 Subsystem Type: 2 (NVM Subsystem) 00:26:06.715 Entry Flags: 00:26:06.715 Duplicate Returned Information: 0 00:26:06.715 Explicit Persistent Connection Support for Discovery: 0 00:26:06.715 Transport Requirements: 00:26:06.715 Secure Channel: Not Specified 00:26:06.715 Port ID: 1 (0x0001) 00:26:06.715 Controller ID: 65535 (0xffff) 00:26:06.715 Admin Max SQ Size: 32 00:26:06.715 Transport Service Identifier: 4420 00:26:06.715 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:06.715 Transport Address: 10.0.0.1 00:26:06.715 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:06.974 get_feature(0x01) failed 00:26:06.974 get_feature(0x02) failed 00:26:06.974 get_feature(0x04) failed 00:26:06.974 ===================================================== 00:26:06.974 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:06.974 ===================================================== 00:26:06.974 Controller Capabilities/Features 00:26:06.974 ================================ 00:26:06.974 Vendor ID: 0000 00:26:06.974 Subsystem Vendor ID: 0000 00:26:06.974 Serial Number: b94f5635af0ded651ddd 00:26:06.974 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:06.974 Firmware Version: 6.8.9-20 00:26:06.974 Recommended Arb Burst: 6 00:26:06.974 IEEE OUI Identifier: 00 00 00 00:26:06.974 Multi-path I/O 00:26:06.974 May have multiple subsystem ports: Yes 00:26:06.974 May have multiple controllers: Yes 00:26:06.974 Associated with SR-IOV VF: No 00:26:06.974 Max Data Transfer Size: Unlimited 00:26:06.974 Max Number of Namespaces: 1024 00:26:06.974 Max Number of I/O Queues: 128 00:26:06.974 NVMe Specification Version (VS): 1.3 00:26:06.974 NVMe Specification Version (Identify): 1.3 00:26:06.974 Maximum Queue Entries: 1024 00:26:06.974 Contiguous Queues Required: No 00:26:06.974 Arbitration Mechanisms Supported 00:26:06.974 Weighted Round Robin: Not Supported 00:26:06.974 Vendor Specific: Not Supported 00:26:06.974 Reset Timeout: 7500 ms 00:26:06.974 Doorbell Stride: 4 bytes 00:26:06.974 NVM Subsystem Reset: Not Supported 00:26:06.974 Command Sets Supported 00:26:06.974 NVM Command Set: Supported 00:26:06.974 Boot Partition: Not Supported 00:26:06.974 
Memory Page Size Minimum: 4096 bytes 00:26:06.974 Memory Page Size Maximum: 4096 bytes 00:26:06.974 Persistent Memory Region: Not Supported 00:26:06.974 Optional Asynchronous Events Supported 00:26:06.974 Namespace Attribute Notices: Supported 00:26:06.974 Firmware Activation Notices: Not Supported 00:26:06.974 ANA Change Notices: Supported 00:26:06.974 PLE Aggregate Log Change Notices: Not Supported 00:26:06.974 LBA Status Info Alert Notices: Not Supported 00:26:06.974 EGE Aggregate Log Change Notices: Not Supported 00:26:06.974 Normal NVM Subsystem Shutdown event: Not Supported 00:26:06.974 Zone Descriptor Change Notices: Not Supported 00:26:06.974 Discovery Log Change Notices: Not Supported 00:26:06.974 Controller Attributes 00:26:06.974 128-bit Host Identifier: Supported 00:26:06.974 Non-Operational Permissive Mode: Not Supported 00:26:06.974 NVM Sets: Not Supported 00:26:06.974 Read Recovery Levels: Not Supported 00:26:06.974 Endurance Groups: Not Supported 00:26:06.974 Predictable Latency Mode: Not Supported 00:26:06.974 Traffic Based Keep ALive: Supported 00:26:06.974 Namespace Granularity: Not Supported 00:26:06.974 SQ Associations: Not Supported 00:26:06.974 UUID List: Not Supported 00:26:06.974 Multi-Domain Subsystem: Not Supported 00:26:06.974 Fixed Capacity Management: Not Supported 00:26:06.974 Variable Capacity Management: Not Supported 00:26:06.974 Delete Endurance Group: Not Supported 00:26:06.974 Delete NVM Set: Not Supported 00:26:06.974 Extended LBA Formats Supported: Not Supported 00:26:06.974 Flexible Data Placement Supported: Not Supported 00:26:06.974 00:26:06.974 Controller Memory Buffer Support 00:26:06.974 ================================ 00:26:06.974 Supported: No 00:26:06.974 00:26:06.974 Persistent Memory Region Support 00:26:06.974 ================================ 00:26:06.974 Supported: No 00:26:06.974 00:26:06.974 Admin Command Set Attributes 00:26:06.974 ============================ 00:26:06.974 Security Send/Receive: Not Supported 00:26:06.974 Format NVM: Not Supported 00:26:06.974 Firmware Activate/Download: Not Supported 00:26:06.974 Namespace Management: Not Supported 00:26:06.974 Device Self-Test: Not Supported 00:26:06.974 Directives: Not Supported 00:26:06.974 NVMe-MI: Not Supported 00:26:06.974 Virtualization Management: Not Supported 00:26:06.974 Doorbell Buffer Config: Not Supported 00:26:06.974 Get LBA Status Capability: Not Supported 00:26:06.974 Command & Feature Lockdown Capability: Not Supported 00:26:06.974 Abort Command Limit: 4 00:26:06.974 Async Event Request Limit: 4 00:26:06.974 Number of Firmware Slots: N/A 00:26:06.974 Firmware Slot 1 Read-Only: N/A 00:26:06.974 Firmware Activation Without Reset: N/A 00:26:06.974 Multiple Update Detection Support: N/A 00:26:06.974 Firmware Update Granularity: No Information Provided 00:26:06.974 Per-Namespace SMART Log: Yes 00:26:06.974 Asymmetric Namespace Access Log Page: Supported 00:26:06.974 ANA Transition Time : 10 sec 00:26:06.974 00:26:06.974 Asymmetric Namespace Access Capabilities 00:26:06.974 ANA Optimized State : Supported 00:26:06.974 ANA Non-Optimized State : Supported 00:26:06.974 ANA Inaccessible State : Supported 00:26:06.974 ANA Persistent Loss State : Supported 00:26:06.974 ANA Change State : Supported 00:26:06.974 ANAGRPID is not changed : No 00:26:06.974 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:06.974 00:26:06.974 ANA Group Identifier Maximum : 128 00:26:06.974 Number of ANA Group Identifiers : 128 00:26:06.974 Max Number of Allowed Namespaces : 1024 00:26:06.974 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:06.974 Command Effects Log Page: Supported 00:26:06.974 Get Log Page Extended Data: Supported 00:26:06.974 Telemetry Log Pages: Not Supported 00:26:06.974 Persistent Event Log Pages: Not Supported 00:26:06.974 Supported Log Pages Log Page: May Support 00:26:06.974 Commands Supported & Effects Log Page: Not Supported 00:26:06.974 Feature Identifiers & Effects Log Page:May Support 00:26:06.974 NVMe-MI Commands & Effects Log Page: May Support 00:26:06.974 Data Area 4 for Telemetry Log: Not Supported 00:26:06.974 Error Log Page Entries Supported: 128 00:26:06.974 Keep Alive: Supported 00:26:06.974 Keep Alive Granularity: 1000 ms 00:26:06.974 00:26:06.974 NVM Command Set Attributes 00:26:06.974 ========================== 00:26:06.974 Submission Queue Entry Size 00:26:06.974 Max: 64 00:26:06.974 Min: 64 00:26:06.974 Completion Queue Entry Size 00:26:06.974 Max: 16 00:26:06.974 Min: 16 00:26:06.974 Number of Namespaces: 1024 00:26:06.974 Compare Command: Not Supported 00:26:06.974 Write Uncorrectable Command: Not Supported 00:26:06.974 Dataset Management Command: Supported 00:26:06.974 Write Zeroes Command: Supported 00:26:06.974 Set Features Save Field: Not Supported 00:26:06.974 Reservations: Not Supported 00:26:06.974 Timestamp: Not Supported 00:26:06.974 Copy: Not Supported 00:26:06.974 Volatile Write Cache: Present 00:26:06.974 Atomic Write Unit (Normal): 1 00:26:06.974 Atomic Write Unit (PFail): 1 00:26:06.974 Atomic Compare & Write Unit: 1 00:26:06.974 Fused Compare & Write: Not Supported 00:26:06.974 Scatter-Gather List 00:26:06.974 SGL Command Set: Supported 00:26:06.974 SGL Keyed: Not Supported 00:26:06.974 SGL Bit Bucket Descriptor: Not Supported 00:26:06.974 SGL Metadata Pointer: Not Supported 00:26:06.974 Oversized SGL: Not Supported 00:26:06.974 SGL Metadata Address: Not Supported 00:26:06.974 SGL Offset: Supported 00:26:06.974 Transport SGL Data Block: Not Supported 00:26:06.974 Replay Protected Memory Block: Not Supported 00:26:06.974 00:26:06.975 Firmware Slot Information 00:26:06.975 ========================= 00:26:06.975 Active slot: 0 00:26:06.975 00:26:06.975 Asymmetric Namespace Access 00:26:06.975 =========================== 00:26:06.975 Change Count : 0 00:26:06.975 Number of ANA Group Descriptors : 1 00:26:06.975 ANA Group Descriptor : 0 00:26:06.975 ANA Group ID : 1 00:26:06.975 Number of NSID Values : 1 00:26:06.975 Change Count : 0 00:26:06.975 ANA State : 1 00:26:06.975 Namespace Identifier : 1 00:26:06.975 00:26:06.975 Commands Supported and Effects 00:26:06.975 ============================== 00:26:06.975 Admin Commands 00:26:06.975 -------------- 00:26:06.975 Get Log Page (02h): Supported 00:26:06.975 Identify (06h): Supported 00:26:06.975 Abort (08h): Supported 00:26:06.975 Set Features (09h): Supported 00:26:06.975 Get Features (0Ah): Supported 00:26:06.975 Asynchronous Event Request (0Ch): Supported 00:26:06.975 Keep Alive (18h): Supported 00:26:06.975 I/O Commands 00:26:06.975 ------------ 00:26:06.975 Flush (00h): Supported 00:26:06.975 Write (01h): Supported LBA-Change 00:26:06.975 Read (02h): Supported 00:26:06.975 Write Zeroes (08h): Supported LBA-Change 00:26:06.975 Dataset Management (09h): Supported 00:26:06.975 00:26:06.975 Error Log 00:26:06.975 ========= 00:26:06.975 Entry: 0 00:26:06.975 Error Count: 0x3 00:26:06.975 Submission Queue Id: 0x0 00:26:06.975 Command Id: 0x5 00:26:06.975 Phase Bit: 0 00:26:06.975 Status Code: 0x2 00:26:06.975 Status Code Type: 0x0 00:26:06.975 Do Not Retry: 1 00:26:06.975 
Error Location: 0x28 00:26:06.975 LBA: 0x0 00:26:06.975 Namespace: 0x0 00:26:06.975 Vendor Log Page: 0x0 00:26:06.975 ----------- 00:26:06.975 Entry: 1 00:26:06.975 Error Count: 0x2 00:26:06.975 Submission Queue Id: 0x0 00:26:06.975 Command Id: 0x5 00:26:06.975 Phase Bit: 0 00:26:06.975 Status Code: 0x2 00:26:06.975 Status Code Type: 0x0 00:26:06.975 Do Not Retry: 1 00:26:06.975 Error Location: 0x28 00:26:06.975 LBA: 0x0 00:26:06.975 Namespace: 0x0 00:26:06.975 Vendor Log Page: 0x0 00:26:06.975 ----------- 00:26:06.975 Entry: 2 00:26:06.975 Error Count: 0x1 00:26:06.975 Submission Queue Id: 0x0 00:26:06.975 Command Id: 0x4 00:26:06.975 Phase Bit: 0 00:26:06.975 Status Code: 0x2 00:26:06.975 Status Code Type: 0x0 00:26:06.975 Do Not Retry: 1 00:26:06.975 Error Location: 0x28 00:26:06.975 LBA: 0x0 00:26:06.975 Namespace: 0x0 00:26:06.975 Vendor Log Page: 0x0 00:26:06.975 00:26:06.975 Number of Queues 00:26:06.975 ================ 00:26:06.975 Number of I/O Submission Queues: 128 00:26:06.975 Number of I/O Completion Queues: 128 00:26:06.975 00:26:06.975 ZNS Specific Controller Data 00:26:06.975 ============================ 00:26:06.975 Zone Append Size Limit: 0 00:26:06.975 00:26:06.975 00:26:06.975 Active Namespaces 00:26:06.975 ================= 00:26:06.975 get_feature(0x05) failed 00:26:06.975 Namespace ID:1 00:26:06.975 Command Set Identifier: NVM (00h) 00:26:06.975 Deallocate: Supported 00:26:06.975 Deallocated/Unwritten Error: Not Supported 00:26:06.975 Deallocated Read Value: Unknown 00:26:06.975 Deallocate in Write Zeroes: Not Supported 00:26:06.975 Deallocated Guard Field: 0xFFFF 00:26:06.975 Flush: Supported 00:26:06.975 Reservation: Not Supported 00:26:06.975 Namespace Sharing Capabilities: Multiple Controllers 00:26:06.975 Size (in LBAs): 3125627568 (1490GiB) 00:26:06.975 Capacity (in LBAs): 3125627568 (1490GiB) 00:26:06.975 Utilization (in LBAs): 3125627568 (1490GiB) 00:26:06.975 UUID: d31d4602-df21-4a8a-911a-bd74975659f4 00:26:06.975 Thin Provisioning: Not Supported 00:26:06.975 Per-NS Atomic Units: Yes 00:26:06.975 Atomic Boundary Size (Normal): 0 00:26:06.975 Atomic Boundary Size (PFail): 0 00:26:06.975 Atomic Boundary Offset: 0 00:26:06.975 NGUID/EUI64 Never Reused: No 00:26:06.975 ANA group ID: 1 00:26:06.975 Namespace Write Protected: No 00:26:06.975 Number of LBA Formats: 1 00:26:06.975 Current LBA Format: LBA Format #00 00:26:06.975 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:06.975 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:06.975 rmmod nvme_tcp 00:26:06.975 rmmod nvme_fabrics 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:06.975 18:34:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.975 18:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.505 18:34:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:09.505 18:34:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:09.505 18:34:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:09.505 18:34:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:26:09.505 18:34:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:09.505 18:34:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:09.505 18:34:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:09.505 18:34:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:09.505 18:34:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:26:09.505 18:34:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:26:09.505 18:34:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:12.035 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:12.035 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:12.035 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:12.035 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:12.035 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:12.035 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:26:12.035 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:12.035 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:12.035 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:12.035 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:12.035 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:12.035 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:12.035 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:12.035 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:12.035 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:12.035 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:13.409 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:13.409 00:26:13.409 real 0m17.285s 00:26:13.409 user 0m4.355s 00:26:13.409 sys 0m8.719s 00:26:13.409 18:34:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:13.409 18:34:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:13.409 ************************************ 00:26:13.409 END TEST nvmf_identify_kernel_target 00:26:13.409 ************************************ 00:26:13.409 18:34:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:13.409 18:34:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:13.409 18:34:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:13.409 18:34:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.668 ************************************ 00:26:13.668 START TEST nvmf_auth_host 00:26:13.668 ************************************ 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:13.668 * Looking for test storage... 
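clean_kernel_target in the trace above undoes that setup in reverse: disable the namespace, unlink the subsystem from the port, remove the configfs nodes innermost-first (configfs refuses rmdir on a non-empty directory), unload the transport modules, and let setup.sh hand the NVMe device back to vfio-pci for the next test. A sketch under the same path assumptions as the setup sketch earlier:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"            # redirection target inferred, as above
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                       # unloads cleanly only once configfs is empty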
00:26:13.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:13.668 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:13.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.669 --rc genhtml_branch_coverage=1 00:26:13.669 --rc genhtml_function_coverage=1 00:26:13.669 --rc genhtml_legend=1 00:26:13.669 --rc geninfo_all_blocks=1 00:26:13.669 --rc geninfo_unexecuted_blocks=1 00:26:13.669 00:26:13.669 ' 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:13.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.669 --rc genhtml_branch_coverage=1 00:26:13.669 --rc genhtml_function_coverage=1 00:26:13.669 --rc genhtml_legend=1 00:26:13.669 --rc geninfo_all_blocks=1 00:26:13.669 --rc geninfo_unexecuted_blocks=1 00:26:13.669 00:26:13.669 ' 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:13.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.669 --rc genhtml_branch_coverage=1 00:26:13.669 --rc genhtml_function_coverage=1 00:26:13.669 --rc genhtml_legend=1 00:26:13.669 --rc geninfo_all_blocks=1 00:26:13.669 --rc geninfo_unexecuted_blocks=1 00:26:13.669 00:26:13.669 ' 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:13.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.669 --rc genhtml_branch_coverage=1 00:26:13.669 --rc genhtml_function_coverage=1 00:26:13.669 --rc genhtml_legend=1 00:26:13.669 --rc geninfo_all_blocks=1 00:26:13.669 --rc geninfo_unexecuted_blocks=1 00:26:13.669 00:26:13.669 ' 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.669 18:34:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:13.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:13.669 18:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.373 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:20.373 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:20.373 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:20.373 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:20.373 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:20.373 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:20.373 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:20.373 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:20.374 18:34:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:20.374 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:20.374 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.374 
18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:20.374 Found net devices under 0000:86:00.0: cvl_0_0 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:20.374 Found net devices under 0000:86:00.1: cvl_0_1 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.374 18:34:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:20.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:26:20.374 00:26:20.374 --- 10.0.0.2 ping statistics --- 00:26:20.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.374 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:20.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:20.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:26:20.374 00:26:20.374 --- 10.0.0.1 ping statistics --- 00:26:20.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.374 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:20.374 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.375 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=553357 00:26:20.375 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:20.375 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 553357 00:26:20.375 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 553357 ']' 00:26:20.375 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.375 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:20.375 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
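nvmf_tcp_init above builds the point-to-point topology for the auth test: the target-side E810 port cvl_0_0 moves into a fresh network namespace and gets 10.0.0.2/24, the initiator-side cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and one ping in each direction verifies the link before nvmf_tgt is started inside the namespace with ip netns exec. Condensed from the trace:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> root namespace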
00:26:20.375 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:20.375 18:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.633 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:20.633 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:20.633 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:20.633 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:20.633 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.633 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=007fdcd5135cd4f31240b6d72c7db9d4 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.18j 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 007fdcd5135cd4f31240b6d72c7db9d4 0 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 007fdcd5135cd4f31240b6d72c7db9d4 0 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=007fdcd5135cd4f31240b6d72c7db9d4 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.18j 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.18j 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.18j 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:20.634 18:34:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3a56419df4943c2b275b5eaec5c2c9a5e2bc8cfbcd0e63943999178973f655da 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.0BV 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3a56419df4943c2b275b5eaec5c2c9a5e2bc8cfbcd0e63943999178973f655da 3 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3a56419df4943c2b275b5eaec5c2c9a5e2bc8cfbcd0e63943999178973f655da 3 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=3a56419df4943c2b275b5eaec5c2c9a5e2bc8cfbcd0e63943999178973f655da 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:20.634 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.0BV 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.0BV 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.0BV 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=21022045356efb5e9bd149951fb06196e57259c8a7c3099f 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.bSL 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 21022045356efb5e9bd149951fb06196e57259c8a7c3099f 0 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 21022045356efb5e9bd149951fb06196e57259c8a7c3099f 0 
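Every gen_dhchap_key call in this stretch follows the same recipe: pull len/2 random bytes out of /dev/urandom with xxd -p, then hand the hex string to format_key, which wraps it into an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<hash id>:<base64 blob>: (hash id 00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). Judging by the secrets printed later in the trace, the base64 blob is the ASCII secret followed by its little-endian CRC-32; the body of format_key in nvmf/common.sh is not shown here, so treat the following as a hedged reconstruction of one null/32 key:

    key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex characters of secret material
    file=$(mktemp -t spdk.key-null.XXX)
    # DHHC-1 layout: prefix, hash id, base64(secret || CRC-32(secret), little-endian)
    python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print("DHHC-1:00:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key" > "$file"
    chmod 0600 "$file"                      # secrets must not be world-readable

The keys[i]/ckeys[i] pairs built this way serve as the host and controller secrets for the bidirectional-auth cases exercised below.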
00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=21022045356efb5e9bd149951fb06196e57259c8a7c3099f 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:20.893 18:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.bSL 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.bSL 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.bSL 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=504fbeb215bd3ceb3dadc9ec6ffbe05f14a601de6eb022c3 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.pcX 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 504fbeb215bd3ceb3dadc9ec6ffbe05f14a601de6eb022c3 2 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 504fbeb215bd3ceb3dadc9ec6ffbe05f14a601de6eb022c3 2 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=504fbeb215bd3ceb3dadc9ec6ffbe05f14a601de6eb022c3 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.pcX 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.pcX 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.pcX 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:20.893 18:34:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=c6933dec75d0bf591f1bbd66ae2d90d8 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.PPf 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key c6933dec75d0bf591f1bbd66ae2d90d8 1 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 c6933dec75d0bf591f1bbd66ae2d90d8 1 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=c6933dec75d0bf591f1bbd66ae2d90d8 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.PPf 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.PPf 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.PPf 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=9514d206040127c22e3f7cb344cd6e5e 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.sAJ 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 9514d206040127c22e3f7cb344cd6e5e 1 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 9514d206040127c22e3f7cb344cd6e5e 1 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=9514d206040127c22e3f7cb344cd6e5e 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.sAJ 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.sAJ 00:26:20.893 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.sAJ 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3a4142f003acf46af1fafde5db5a6d6e6efd02c22153a441 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.gCF 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3a4142f003acf46af1fafde5db5a6d6e6efd02c22153a441 2 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3a4142f003acf46af1fafde5db5a6d6e6efd02c22153a441 2 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=3a4142f003acf46af1fafde5db5a6d6e6efd02c22153a441 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.gCF 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.gCF 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.gCF 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:21.152 18:34:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6e7beccbf5af9c39d25a1b341d015672 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.oyT 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6e7beccbf5af9c39d25a1b341d015672 0 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6e7beccbf5af9c39d25a1b341d015672 0 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6e7beccbf5af9c39d25a1b341d015672 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.oyT 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.oyT 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.oyT 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=2c323455ab60b9b771a15c8a292a9baf0c1689d482aebe864e783414901dde19 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:21.152 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.jXK 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 2c323455ab60b9b771a15c8a292a9baf0c1689d482aebe864e783414901dde19 3 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 2c323455ab60b9b771a15c8a292a9baf0c1689d482aebe864e783414901dde19 3 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=2c323455ab60b9b771a15c8a292a9baf0c1689d482aebe864e783414901dde19 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.jXK 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.jXK 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.jXK 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 553357 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 553357 ']' 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:21.153 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.18j 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.0BV ]] 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0BV 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.bSL 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.pcX ]] 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.pcX 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.PPf 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.sAJ ]] 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sAJ 00:26:21.412 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.gCF 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.oyT ]] 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.oyT 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.jXK 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:21.413 18:34:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:21.413 18:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:24.697 Waiting for block devices as requested 00:26:24.697 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:24.697 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:24.697 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:24.697 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:24.697 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:24.697 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:24.697 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:24.697 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:24.697 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:24.955 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:24.955 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:24.955 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:25.213 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:25.213 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:25.213 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:25.213 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:25.472 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:26.039 No valid GPT data, bailing 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:26.039 18:34:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:26.039 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:26.297 00:26:26.297 Discovery Log Number of Records 2, Generation counter 2 00:26:26.297 =====Discovery Log Entry 0====== 00:26:26.297 trtype: tcp 00:26:26.297 adrfam: ipv4 00:26:26.297 subtype: current discovery subsystem 00:26:26.297 treq: not specified, sq flow control disable supported 00:26:26.297 portid: 1 00:26:26.297 trsvcid: 4420 00:26:26.297 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:26.297 traddr: 10.0.0.1 00:26:26.297 eflags: none 00:26:26.297 sectype: none 00:26:26.297 =====Discovery Log Entry 1====== 00:26:26.297 trtype: tcp 00:26:26.297 adrfam: ipv4 00:26:26.297 subtype: nvme subsystem 00:26:26.297 treq: not specified, sq flow control disable supported 00:26:26.297 portid: 1 00:26:26.297 trsvcid: 4420 00:26:26.297 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:26.297 traddr: 10.0.0.1 00:26:26.297 eflags: none 00:26:26.297 sectype: none 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:26.297 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:26.298 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.298 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.298 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.298 nvme0n1 00:26:26.298 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.298 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.298 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.298 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.298 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.298 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.298 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.298 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.298 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.298 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.555 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: ]] 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
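Each connect_authenticate iteration from here on reduces to a short RPC sequence against the SPDK host: load the host secret (and controller secret, when one exists) into the keyring, constrain the bdev_nvme driver to the digest and DH group under test, attach to the kernel target with the key pair, verify the controller appeared, and detach. Condensed from the first attach traced above (keyid 1), with rpc.py at its usual scripts/ location standing in for the harness's rpc_cmd wrapper:

    ./scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-null.bSL
    ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pcX
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    ./scripts/rpc.py bdev_nvme_get_controllers     # expect a controller named nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0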
00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.556 nvme0n1 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.556 18:34:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.556 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:26.814 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.814 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:26.814 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:26.814 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:26.814 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.814 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.814 18:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.814 nvme0n1 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:26.814 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.815 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.073 nvme0n1 00:26:27.073 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.073 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.073 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.073 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:27.073 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.073 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.073 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.073 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.073 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: ]] 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.074 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.332 nvme0n1 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.332 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.333 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.591 nvme0n1 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.591 18:34:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:27.591 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: ]] 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.592 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.850 nvme0n1 00:26:27.850 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.850 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.850 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.850 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.850 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.850 18:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:27.850 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:27.851 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.851 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.851 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:27.851 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.851 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:27.851 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:27.851 
18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:27.851 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:27.851 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.851 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.109 nvme0n1 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.109 18:34:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.109 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.368 nvme0n1 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: ]] 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.368 18:34:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.368 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.627 nvme0n1 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:28.627 18:34:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.627 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.885 nvme0n1 00:26:28.885 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.885 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.885 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.885 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.885 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.885 18:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: ]] 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.885 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:28.886 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.886 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:28.886 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:28.886 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:28.886 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:28.886 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.886 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.144 nvme0n1 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:29.144 18:34:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.144 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.402 nvme0n1 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:29.402 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:29.403 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:29.403 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:29.403 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:29.403 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.403 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:29.403 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:29.403 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:29.403 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.403 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:29.403 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
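[annotation] The loop recorded above repeats one short RPC sequence per digest/dhgroup/keyid combination: nvmet_auth_set_key appears to program the kernel nvmet target side with the matching digest, dhgroup and key, then the host's allowed DH-HMAC-CHAP parameters are set, the controller is attached with that keyid's key pair, the controller is confirmed to come up as nvme0, and it is detached before the next combination. In this log, rpc_cmd is the autotest harness wrapper around SPDK's scripts/rpc.py; a minimal standalone sketch of the iteration starting here (sha256 / ffdhe4096 / keyid=2) might look like the lines below, assuming the target brought up earlier in the run is still listening on 10.0.0.1:4420 and that keys named key2/ckey2 were already registered with the keyring (that setup is not part of this excerpt).

  # Sketch only; rpc.py lives in spdk/scripts/. key2/ckey2 are assumed to be
  # keyring entries created earlier in the run (e.g. via keyring_file_add_key),
  # holding the DHHC-1 host key and controller key for keyid 2.
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The same sequence recurs through this section with ffdhe2048, ffdhe3072, ffdhe4096 and ffdhe6144 and keyids 0 through 4; only the dhgroup, keyid and key material change between passes.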
00:26:29.403 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.403 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.661 nvme0n1 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.661 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.919 18:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: ]] 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.919 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.920 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.920 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.920 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:29.920 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:29.920 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:29.920 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.920 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.920 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:29.920 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.920 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:29.920 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:29.920 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:29.920 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:29.920 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.920 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.178 nvme0n1 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.178 18:34:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.178 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.436 nvme0n1 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: ]] 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.436 18:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.003 nvme0n1 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 
00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.003 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.262 nvme0n1 00:26:31.262 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.262 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.262 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.262 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.262 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.262 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.520 18:34:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.520 18:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.778 nvme0n1 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:31.778 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: ]] 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:31.779 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:32.037 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:32.037 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.037 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.296 nvme0n1 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.296 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.862 nvme0n1 00:26:32.862 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.862 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.862 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.862 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.862 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: ]] 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.863 18:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:33.429 nvme0n1 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:33.429 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.430 18:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.997 nvme0n1 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:33.997 
18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.997 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.563 nvme0n1 00:26:34.563 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.563 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.563 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.563 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.563 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.563 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: ]] 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.822 
18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.822 18:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.391 nvme0n1 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.391 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.392 18:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.959 nvme0n1 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:35.959 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: ]] 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.960 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.218 nvme0n1 00:26:36.218 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.218 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.218 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.218 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.218 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:36.218 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.219 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.477 nvme0n1 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:36.477 18:34:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.477 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.736 nvme0n1 00:26:36.736 18:34:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:36.736 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: ]] 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.737 18:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.737 nvme0n1 00:26:36.737 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:36.995 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.996 nvme0n1 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.996 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: ]] 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:37.254 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.255 nvme0n1 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.255 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.513 
18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:37.513 18:34:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.513 nvme0n1 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.513 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.772 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.772 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.772 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.772 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.772 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.773 18:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.773 nvme0n1 00:26:37.773 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.773 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.773 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.773 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.773 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.773 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: ]] 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.032 nvme0n1 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.032 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.290 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.290 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.290 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.290 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.290 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.290 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.290 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:38.290 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.290 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:38.290 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:38.290 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:38.291 
18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.291 nvme0n1 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.291 
18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.291 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.549 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.549 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:38.549 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.549 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:38.549 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.549 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:38.549 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:38.549 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:38.549 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:38.549 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:38.549 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:38.549 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:38.549 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:38.549 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: ]] 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.550 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.809 nvme0n1 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:38.809 18:34:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.809 18:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.068 nvme0n1 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.069 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.327 nvme0n1 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
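[Editor's note] The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment at host/auth.sh@58, which opens the line above, is bash's :+ alternate-value expansion: when ckeys[keyid] is set and non-empty the array receives the flag/argument pair, otherwise it stays empty, which is why attach calls for key IDs without a controller key run without --dhchap-ctrlr-key. A standalone illustration of the idiom (array contents invented for the demo):

    ckeys=("ctrl-secret" "")   # keyid 0 has a controller key, keyid 1 does not
    for keyid in 0 1; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
    done
    # keyid=0 -> 2 extra args: --dhchap-ctrlr-key ckey0
    # keyid=1 -> 0 extra args: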
nvmet_auth_set_key sha384 ffdhe4096 3 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: ]] 00:26:39.327 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.328 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.586 nvme0n1 00:26:39.586 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.586 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.586 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.586 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.586 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.586 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.586 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.586 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.586 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.586 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:39.844 18:34:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:39.844 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:39.845 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:39.845 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:39.845 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.845 18:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.103 nvme0n1 00:26:40.103 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.103 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.103 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.103 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
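[Editor's note] That closes the sha384/ffdhe4096 pass; the outer loop re-runs the same five key IDs with ffdhe6144 below. Condensed, one iteration of the host/auth.sh@101-104 loops does the following (a sketch assembled from the commands visible in this trace, not a verbatim copy of the script):

    nvmet_auth_set_key sha384 "$dhgroup" "$keyid"     # program the kernel nvmet target side
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"  # host offers only this pair
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"         # ckey expands to nothing for keyid 4
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0         # tear down before the next pass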
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: ]] 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.104 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.363 nvme0n1 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.363 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.622 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.622 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:40.622 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:40.622 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:40.622 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.622 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.622 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:40.622 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.622 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:40.622 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:40.622 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:40.622 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:40.622 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.622 18:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.881 nvme0n1 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.881 18:34:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.881 18:34:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.881 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.448 nvme0n1 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
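[Editor's note] The key= assignment that resumes just below uses the NVMe-oF DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64>: where the two-digit field selects the transformation hash applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the raw secret with a 4-byte CRC-32 appended. A quick sanity check of one key copied from this trace (illustrative shell, coreutils base64/wc assumed):

    key='DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1:'
    b64=${key#DHHC-1:*:}                      # strip prefix and transform field
    echo -n "${b64%:}" | base64 -d | wc -c    # 36 bytes = 32-byte secret + CRC-32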
key=DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: ]] 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:41.448 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.448 
18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.707 nvme0n1 00:26:41.707 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.707 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.707 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.707 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.707 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.707 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.707 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.707 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.707 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.707 18:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.707 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.707 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.707 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:41.707 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.707 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:41.707 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:41.707 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:41.707 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:41.707 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:41.707 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:41.707 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:41.707 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:41.707 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:41.707 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.708 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.275 nvme0n1 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.275 18:34:35 
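[Editor's note] With ffdhe6144 done, the loop moves on to ffdhe8192; the ffdhe* names are the RFC 7919 finite-field DH groups that DH-HMAC-CHAP can negotiate. On the target side, the four echo statements of nvmet_auth_set_key (host/auth.sh@48-51) most plausibly land in the kernel nvmet host's configfs attributes; the paths below are an assumption based on the standard nvmet layout, since the redirections themselves are not captured by xtrace:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
    echo 'hmac(sha384)' > "$host/dhchap_hash"      # digest, in kernel crypto-API notation
    echo "$dhgroup"     > "$host/dhchap_dhgroup"   # e.g. ffdhe8192
    echo "$key"         > "$host/dhchap_key"
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # only when a ctrlr key exists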
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: ]] 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:42.275 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.276 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.276 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:42.276 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.276 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:42.276 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:42.276 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:42.276 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:42.276 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.276 18:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.842 nvme0n1 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:42.842 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.843 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.409 nvme0n1 00:26:43.409 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.409 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.409 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.409 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.409 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.409 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.668 
18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.668 18:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.236 nvme0n1 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: ]] 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.236 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.803 nvme0n1 00:26:44.803 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.803 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.803 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.803 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.803 18:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.803 18:34:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:44.803 18:34:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.803 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.370 nvme0n1 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:45.370 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: ]] 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:45.629 nvme0n1 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:45.629 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.630 18:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.889 nvme0n1 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:45.889 
18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.889 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.148 nvme0n1 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: ]] 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.148 
18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.148 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.407 nvme0n1 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.407 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:46.408 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.408 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:46.408 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:46.408 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:46.408 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:46.408 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.408 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.667 nvme0n1 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: ]] 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.667 18:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.926 nvme0n1 00:26:46.926 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.926 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.926 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.926 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.926 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.926 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.927 
18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:46.927 18:34:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.927 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.186 nvme0n1 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:47.186 18:34:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:47.186 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.187 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.446 nvme0n1 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: ]] 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.446 18:34:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.446 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.705 nvme0n1 00:26:47.705 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.705 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.705 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.705 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.705 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:47.706 
18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.706 18:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
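
For orientation: the trace above and below shows the nvmf_auth_host test exercising DH-HMAC-CHAP authentication with the sha512 digest, iterating every key id (0-4) for each DH group — ffdhe3072 completes here, with ffdhe4096 and ffdhe6144 following. Condensed from the xtrace output of host/auth.sh, the loop amounts to the sketch below. This is a reconstruction, not the verbatim script: nvmet_auth_set_key, rpc_cmd, and the keys/ckeys arrays are the harness helpers visible in the trace, while the loop framing is inferred from the "for dhgroup"/"for keyid" lines it prints.

  # For each DH group and key id: install the key on the target side,
  # pin the host to one digest/dhgroup pair, connect with DH-HMAC-CHAP,
  # verify the controller came up, then tear it down for the next round.
  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
    for keyid in "${!keys[@]}"; do                    # key ids 0..4
      nvmet_auth_set_key sha512 "$dhgroup" "$keyid"   # target-side key setup
      rpc_cmd bdev_nvme_set_options \
          --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
      # The ctrlr key is passed only when a ckey exists for this key id.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" \
          ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done

The ${ckeys[keyid]:+...} expansion is also why key4 immediately above is attached without --dhchap-ctrlr-key: its ckey entry is empty (the trace shows "ckey=" and "[[ -z '' ]]"), so bidirectional authentication is skipped for that key id.
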
00:26:47.706 nvme0n1 00:26:47.706 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:47.965 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: ]] 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:47.966 18:34:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.966 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.225 nvme0n1 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.225 18:34:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.225 18:34:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.225 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.484 nvme0n1 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.484 18:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.743 nvme0n1 00:26:48.743 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.743 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.743 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.743 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.743 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.743 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.743 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.743 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:48.743 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.743 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: ]] 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.002 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.260 nvme0n1 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:49.260 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.261 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.520 nvme0n1 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: ]] 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.520 18:34:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.520 18:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.088 nvme0n1 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.088 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:50.089 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:50.089 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:50.089 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:50.089 18:34:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.089 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.347 nvme0n1 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:50.347 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.348 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.914 nvme0n1 00:26:50.914 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.914 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.914 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.914 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.914 18:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:50.914 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: ]] 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.915 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.172 nvme0n1 00:26:51.172 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.173 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.173 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.173 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.173 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.173 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.173 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.173 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.173 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.173 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:51.431 18:34:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.431 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.689 nvme0n1 00:26:51.689 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.689 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.689 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.689 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.689 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.689 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.689 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.689 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.689 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.689 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.689 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.689 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W: 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: ]] 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E1NjQxOWRmNDk0M2MyYjI3NWI1ZWFlYzVjMmM5YTVlMmJjOGNmYmNkMGU2Mzk0Mzk5OTE3ODk3M2Y2NTVkYenqWyA=: 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.690 18:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.255 nvme0n1 00:26:52.255 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.255 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.255 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.255 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.255 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.255 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.514 18:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.081 nvme0n1 00:26:53.081 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.081 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.081 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.081 18:34:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.081 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.081 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.081 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.081 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.081 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.081 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.081 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.081 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.081 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:53.081 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.081 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.082 18:34:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.082 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.649 nvme0n1 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E0MTQyZjAwM2FjZjQ2YWYxZmFmZGU1ZGI1YTZkNmU2ZWZkMDJjMjIxNTNhNDQxKtfQcQ==: 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: ]] 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU3YmVjY2JmNWFmOWMzOWQyNWExYjM0MWQwMTU2NzJU7Fjz: 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:53.649 18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.649 
18:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.216 nvme0n1 00:26:54.216 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.216 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.216 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.216 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.216 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.216 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.216 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.216 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.216 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.216 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmMzMjM0NTVhYjYwYjliNzcxYTE1YzhhMjkyYTliYWYwYzE2ODlkNDgyYWViZTg2NGU3ODM0MTQ5MDFkZGUxOX37+6I=: 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:54.474 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:54.475 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:54.475 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:54.475 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.475 18:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.042 nvme0n1 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.042 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.042 request: 00:26:55.042 { 00:26:55.042 "name": "nvme0", 00:26:55.042 "trtype": "tcp", 00:26:55.042 "traddr": "10.0.0.1", 00:26:55.042 "adrfam": "ipv4", 00:26:55.042 "trsvcid": "4420", 00:26:55.042 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:55.042 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:55.042 "prchk_reftag": false, 00:26:55.042 "prchk_guard": false, 00:26:55.042 "hdgst": false, 00:26:55.042 "ddgst": false, 00:26:55.042 "allow_unrecognized_csi": false, 00:26:55.042 "method": "bdev_nvme_attach_controller", 00:26:55.042 "req_id": 1 00:26:55.043 } 00:26:55.043 Got JSON-RPC error response 00:26:55.043 response: 00:26:55.043 { 00:26:55.043 "code": -5, 00:26:55.043 "message": "Input/output error" 00:26:55.043 } 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
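The failed attach just above is the point of the host/auth.sh@112 step: with DH-HMAC-CHAP required by the target, bdev_nvme_attach_controller without any --dhchap-key must be refused, and the RPC surfaces that as JSON-RPC code -5 ("Input/output error") rather than leaving a half-connected controller behind. A minimal sketch of the same check driven by hand, assuming SPDK's stock scripts/rpc.py client on its default RPC socket (rpc_cmd in the trace is a thin wrapper around it; the flags are exactly the ones logged):

    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0
    # Expected to fail with code -5: the subsystem demands DH-HMAC-CHAP and
    # no --dhchap-key was offered.

    # Mirror of the host/auth.sh@114 follow-up: no stale controller remains.
    (( $(./scripts/rpc.py bdev_nvme_get_controllers | jq length) == 0 ))
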
00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.043 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.302 request: 00:26:55.302 { 00:26:55.302 "name": "nvme0", 00:26:55.302 "trtype": "tcp", 00:26:55.302 "traddr": "10.0.0.1", 00:26:55.302 "adrfam": "ipv4", 00:26:55.302 "trsvcid": "4420", 00:26:55.302 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:55.302 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:55.302 "prchk_reftag": false, 00:26:55.302 "prchk_guard": false, 00:26:55.302 "hdgst": false, 00:26:55.302 "ddgst": false, 00:26:55.302 "dhchap_key": "key2", 00:26:55.302 "allow_unrecognized_csi": false, 00:26:55.302 "method": "bdev_nvme_attach_controller", 00:26:55.302 "req_id": 1 00:26:55.302 } 00:26:55.302 Got JSON-RPC error response 00:26:55.302 response: 00:26:55.302 { 00:26:55.302 "code": -5, 00:26:55.302 "message": "Input/output error" 00:26:55.302 } 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
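The attempt above with --dhchap-key key2 alone fails identically: key2 is a well-formed secret, but the target side was provisioned with key 1 for this round (host/auth.sh@110), so the challenge-response cannot match; the next attempt pairs the right host key with the wrong bidirectional key (ckey2) and is refused the same way. The DHHC-1 strings being exchanged are the NVMe TP-8006 secret representation; as a rough sketch (my reading of the format, not something this trace asserts), the base64 payload carries the raw secret followed by a 4-byte CRC32, with the middle field 00 meaning the secret is stored untransformed:

    # Decoding key0 from the ffdhe8192 round earlier in this trace; the
    # CRC-suffix layout is an assumption, the base64 content itself is not.
    key='DHHC-1:00:MDA3ZmRjZDUxMzVjZDRmMzEyNDBiNmQ3MmM3ZGI5ZDRa0L9W:'
    b64=${key#DHHC-1:00:}
    b64=${b64%:}
    printf '%s' "$b64" | base64 -d | head -c -4  # GNU head: drop the 4 CRC bytes
    # -> 007fdcd5135cd4f31240b6d72c7db9d4
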
00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.302 request: 00:26:55.302 { 00:26:55.302 "name": "nvme0", 00:26:55.302 "trtype": "tcp", 00:26:55.302 "traddr": "10.0.0.1", 00:26:55.302 "adrfam": "ipv4", 00:26:55.302 "trsvcid": "4420", 00:26:55.302 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:55.302 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:55.302 "prchk_reftag": false, 00:26:55.302 "prchk_guard": false, 00:26:55.302 "hdgst": false, 00:26:55.302 "ddgst": false, 00:26:55.302 "dhchap_key": "key1", 00:26:55.302 "dhchap_ctrlr_key": "ckey2", 00:26:55.302 "allow_unrecognized_csi": false, 00:26:55.302 "method": "bdev_nvme_attach_controller", 00:26:55.302 "req_id": 1 00:26:55.302 } 00:26:55.302 Got JSON-RPC error response 00:26:55.302 response: 00:26:55.302 { 00:26:55.302 "code": -5, 00:26:55.302 "message": "Input/output 
error" 00:26:55.302 } 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.302 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.561 nvme0n1 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.561 request: 00:26:55.561 { 00:26:55.561 "name": "nvme0", 00:26:55.561 "dhchap_key": "key1", 00:26:55.561 "dhchap_ctrlr_key": "ckey2", 00:26:55.561 "method": "bdev_nvme_set_keys", 00:26:55.561 "req_id": 1 00:26:55.561 } 00:26:55.561 Got JSON-RPC error response 00:26:55.561 response: 00:26:55.561 { 00:26:55.561 "code": -13, 00:26:55.561 "message": "Permission denied" 00:26:55.561 } 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:55.561 18:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:56.938 18:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.938 18:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:56.938 18:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.938 18:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.938 18:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.938 18:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:56.938 18:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjEwMjIwNDUzNTZlZmI1ZTliZDE0OTk1MWZiMDYxOTZlNTcyNTljOGE3YzMwOTlmSOHsnA==: 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: ]] 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NTA0ZmJlYjIxNWJkM2NlYjNkYWRjOWVjNmZmYmUwNWYxNGE2MDFkZTZlYjAyMmMztgp2Tg==: 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.874 18:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.874 nvme0n1 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MzNkZWM3NWQwYmY1OTFmMWJiZDY2YWUyZDkwZDgSAMv1: 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: ]] 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTUxNGQyMDYwNDAxMjdjMjJlM2Y3Y2IzNDRjZDZlNWUAvmJ/: 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.874 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.168 request: 00:26:58.168 { 00:26:58.168 "name": "nvme0", 00:26:58.168 "dhchap_key": "key2", 00:26:58.168 "dhchap_ctrlr_key": "ckey1", 00:26:58.168 "method": "bdev_nvme_set_keys", 00:26:58.168 "req_id": 1 00:26:58.168 } 00:26:58.168 Got JSON-RPC error response 00:26:58.168 response: 00:26:58.168 { 00:26:58.168 "code": -13, 00:26:58.168 "message": "Permission denied" 00:26:58.168 } 00:26:58.168 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:58.168 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:58.168 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:58.168 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:58.168 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:58.168 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.168 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.168 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.168 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:58.168 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.168 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:58.168 18:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:59.141 18:34:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:59.141 rmmod nvme_tcp 00:26:59.141 rmmod nvme_fabrics 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 553357 ']' 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 553357 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 553357 ']' 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 553357 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 553357 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 553357' 00:26:59.141 killing process with pid 553357 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 553357 00:26:59.141 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 553357 00:26:59.400 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:59.400 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:59.400 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:59.400 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:59.400 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:26:59.400 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:26:59.400 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:59.400 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:59.400 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:59.400 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.400 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:26:59.400 18:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.933 18:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:01.933 18:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:01.933 18:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:01.933 18:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:01.933 18:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:01.933 18:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:27:01.933 18:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:01.933 18:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:01.933 18:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:01.933 18:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:01.933 18:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:01.933 18:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:01.933 18:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:04.479 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:04.479 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:04.479 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:04.479 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:04.479 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:04.479 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:04.479 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:04.479 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:04.479 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:04.479 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:04.479 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:04.480 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:04.480 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:04.480 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:04.480 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:04.480 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:05.856 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:06.114 18:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.18j /tmp/spdk.key-null.bSL /tmp/spdk.key-sha256.PPf /tmp/spdk.key-sha384.gCF /tmp/spdk.key-sha512.jXK /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:06.114 18:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:08.645 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:08.645 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:08.645 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:27:08.645 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:08.645 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:08.645 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:08.645 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:08.646 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:08.646 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:08.646 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:08.646 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:08.646 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:08.646 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:08.646 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:08.646 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:08.646 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:08.646 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:08.905 00:27:08.905 real 0m55.317s 00:27:08.905 user 0m49.449s 00:27:08.905 sys 0m12.857s 00:27:08.905 18:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:08.905 18:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.905 ************************************ 00:27:08.905 END TEST nvmf_auth_host 00:27:08.905 ************************************ 00:27:08.905 18:35:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:08.905 18:35:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:08.905 18:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:08.905 18:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:08.905 18:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.905 ************************************ 00:27:08.905 START TEST nvmf_digest 00:27:08.905 ************************************ 00:27:08.905 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:09.165 * Looking for test storage... 
00:27:09.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.165 --rc genhtml_branch_coverage=1 00:27:09.165 --rc genhtml_function_coverage=1 00:27:09.165 --rc genhtml_legend=1 00:27:09.165 --rc geninfo_all_blocks=1 00:27:09.165 --rc geninfo_unexecuted_blocks=1 00:27:09.165 00:27:09.165 ' 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.165 --rc genhtml_branch_coverage=1 00:27:09.165 --rc genhtml_function_coverage=1 00:27:09.165 --rc genhtml_legend=1 00:27:09.165 --rc geninfo_all_blocks=1 00:27:09.165 --rc geninfo_unexecuted_blocks=1 00:27:09.165 00:27:09.165 ' 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.165 --rc genhtml_branch_coverage=1 00:27:09.165 --rc genhtml_function_coverage=1 00:27:09.165 --rc genhtml_legend=1 00:27:09.165 --rc geninfo_all_blocks=1 00:27:09.165 --rc geninfo_unexecuted_blocks=1 00:27:09.165 00:27:09.165 ' 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.165 --rc genhtml_branch_coverage=1 00:27:09.165 --rc genhtml_function_coverage=1 00:27:09.165 --rc genhtml_legend=1 00:27:09.165 --rc geninfo_all_blocks=1 00:27:09.165 --rc geninfo_unexecuted_blocks=1 00:27:09.165 00:27:09.165 ' 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.165 
18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:09.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:09.165 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:09.166 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:09.166 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.166 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:09.166 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:09.166 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:09.166 18:35:02 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.166 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.166 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.166 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:09.166 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:09.166 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:09.166 18:35:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:15.733 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:15.733 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:15.733 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.734 
18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:15.734 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:15.734 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:15.734 Found net devices under 0000:86:00.0: cvl_0_0 
00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:15.734 Found net devices under 0000:86:00.1: cvl_0_1 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:15.734 18:35:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:15.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:15.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:27:15.734 00:27:15.734 --- 10.0.0.2 ping statistics --- 00:27:15.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.734 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:15.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:15.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:27:15.734 00:27:15.734 --- 10.0.0.1 ping statistics --- 00:27:15.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.734 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:15.734 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:15.734 ************************************ 00:27:15.735 START TEST nvmf_digest_clean 00:27:15.735 ************************************ 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=567376 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 567376 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 567376 ']' 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:15.735 18:35:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:15.735 [2024-10-08 18:35:08.361756] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:27:15.735 [2024-10-08 18:35:08.361798] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.735 [2024-10-08 18:35:08.433435] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.735 [2024-10-08 18:35:08.511730] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:15.735 [2024-10-08 18:35:08.511770] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.735 [2024-10-08 18:35:08.511777] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:15.735 [2024-10-08 18:35:08.511784] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:15.735 [2024-10-08 18:35:08.511789] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
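The nvmf_tgt above is held at --wait-for-rpc, so nothing in it initializes until the harness sends framework_start_init over the RPC socket; the reactor start, the null0 bdev and the TCP listener on 10.0.0.2:4420 that follow are all driven through that socket. A minimal by-hand sketch of the same bring-up, assuming a local SPDK checkout (the harness's actual rpc_cmd batch is not expanded in this trace, and the bdev_null_create size/block-size arguments below are illustrative, not taken from the log):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  ./scripts/rpc.py framework_start_init                # release the app paused by --wait-for-rpc
  ./scripts/rpc.py nvmf_create_transport -t tcp -o     # '-t tcp -o' as in NVMF_TRANSPORT_OPTS above
  ./scripts/rpc.py bdev_null_create null0 1000 512     # null backing bdev; sizes assumed for the sketch
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420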
00:27:15.735 [2024-10-08 18:35:08.512335] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.994 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:15.994 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:15.994 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:15.994 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:15.994 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:15.994 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.994 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:15.994 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:15.994 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:15.994 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.994 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:15.994 null0 00:27:15.994 [2024-10-08 18:35:09.309608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.252 [2024-10-08 18:35:09.333797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.252 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.252 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:16.252 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:16.252 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:16.252 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:16.252 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:16.253 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:16.253 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:16.253 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=567452 00:27:16.253 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 567452 /var/tmp/bperf.sock 00:27:16.253 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:16.253 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 567452 ']' 00:27:16.253 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:16.253 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:27:16.253 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:16.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:16.253 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:16.253 18:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:16.253 [2024-10-08 18:35:09.386464] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:27:16.253 [2024-10-08 18:35:09.386507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567452 ] 00:27:16.253 [2024-10-08 18:35:09.453881] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.253 [2024-10-08 18:35:09.532739] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.188 18:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:17.188 18:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:17.188 18:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:17.188 18:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:17.188 18:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:17.188 18:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:17.188 18:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:17.446 nvme0n1 00:27:17.704 18:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:17.704 18:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:17.704 Running I/O for 2 seconds... 
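While that first pass (4 KiB random reads at queue depth 128) runs, it is worth spelling out the initiator side. bdevperf is started with --wait-for-rpc on its own socket, released with framework_start_init, and attached to the target with --ddgst, which enables the NVMe/TCP data digest so every data PDU is covered by a CRC32C computed through SPDK's accel layer. Pulled out of the trace above into a stand-alone sketch (paths assume the same SPDK tree):

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # afterwards, confirm the digests were really computed (the check host/digest.sh runs below):
  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

That jq filter is the pass/fail core of nvmf_digest_clean: the run only counts if the crc32c opcode executed a non-zero number of times in the expected module, which is the software path here since no DSA accelerator is configured (scan_dsa=false above).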
00:27:19.574 25560.00 IOPS, 99.84 MiB/s [2024-10-08T16:35:12.897Z] 26081.00 IOPS, 101.88 MiB/s
00:27:19.574 Latency(us)
00:27:19.574 [2024-10-08T16:35:12.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:19.574 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:19.574 nvme0n1 : 2.00 26102.26 101.96 0.00 0.00 4898.89 2262.55 15416.56
00:27:19.574 [2024-10-08T16:35:12.897Z] ===================================================================================================================
00:27:19.574 [2024-10-08T16:35:12.897Z] Total : 26102.26 101.96 0.00 0.00 4898.89 2262.55 15416.56
00:27:19.574 {
00:27:19.574 "results": [
00:27:19.574 {
00:27:19.574 "job": "nvme0n1",
00:27:19.574 "core_mask": "0x2",
00:27:19.574 "workload": "randread",
00:27:19.574 "status": "finished",
00:27:19.574 "queue_depth": 128,
00:27:19.574 "io_size": 4096,
00:27:19.574 "runtime": 2.003275,
00:27:19.574 "iops": 26102.257553256542,
00:27:19.574 "mibps": 101.96194356740837,
00:27:19.574 "io_failed": 0,
00:27:19.574 "io_timeout": 0,
00:27:19.574 "avg_latency_us": 4898.888513400541,
00:27:19.574 "min_latency_us": 2262.552380952381,
00:27:19.574 "max_latency_us": 15416.56380952381
00:27:19.574 }
00:27:19.574 ],
00:27:19.574 "core_count": 1
00:27:19.574 }
00:27:19.833 18:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:19.833 18:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:19.833 18:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:19.833 18:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:19.833 | select(.opcode=="crc32c")
00:27:19.834 | "\(.module_name) \(.executed)"'
00:27:19.834 18:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:19.834 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:19.834 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:19.834 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:19.834 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:19.834 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 567452
00:27:19.834 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 567452 ']'
00:27:19.834 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 567452
00:27:19.834 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:27:19.834 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:19.834 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 567452
00:27:19.834 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:19.834 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:27:19.834 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 567452' 00:27:19.834 killing process with pid 567452 00:27:19.834 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 567452 00:27:19.834 Received shutdown signal, test time was about 2.000000 seconds 00:27:19.834 00:27:19.834 Latency(us) 00:27:19.834 [2024-10-08T16:35:13.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.834 [2024-10-08T16:35:13.157Z] =================================================================================================================== 00:27:19.834 [2024-10-08T16:35:13.157Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:19.834 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 567452 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=568104 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 568104 /var/tmp/bperf.sock 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 568104 ']' 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:20.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:20.092 18:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:20.092 [2024-10-08 18:35:13.356582] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:27:20.093 [2024-10-08 18:35:13.356628] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid568104 ] 00:27:20.093 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:20.093 Zero copy mechanism will not be used. 00:27:20.351 [2024-10-08 18:35:13.423078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.351 [2024-10-08 18:35:13.492799] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.939 18:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:20.939 18:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:20.939 18:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:20.939 18:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:20.939 18:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:21.198 18:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:21.198 18:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:21.766 nvme0n1 00:27:21.766 18:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:21.766 18:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:21.766 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:21.766 Zero copy mechanism will not be used. 00:27:21.766 Running I/O for 2 seconds... 
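Each digest_clean iteration above follows the same RPC-driven sequence: bdevperf starts paused, the accel framework is initialized with its defaults (scan_dsa=false, so crc32c stays in the software module), the controller is attached with data digest enabled, and the timed run is triggered. A minimal standalone sketch of that sequence, assuming an SPDK repo root and reusing the exact flags and addresses from the trace (10.0.0.2:4420, /var/tmp/bperf.sock):

  # Launch bdevperf idle (-z, --wait-for-rpc) on core mask 0x2.
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &

  # Finish framework init with the default (software) accel modules.
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

  # Attach the NVMe-oF TCP controller with data digest (--ddgst) enabled;
  # this is what forces a crc32c over every data PDU.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Run the timed workload; a JSON block like the one below is its output.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests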
00:27:23.639 5640.00 IOPS, 705.00 MiB/s [2024-10-08T16:35:16.962Z] 5926.00 IOPS, 740.75 MiB/s 00:27:23.639 Latency(us) 00:27:23.639 [2024-10-08T16:35:16.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:23.639 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:23.639 nvme0n1 : 2.00 5925.35 740.67 0.00 0.00 2697.61 651.46 6335.15 00:27:23.639 [2024-10-08T16:35:16.962Z] =================================================================================================================== 00:27:23.639 [2024-10-08T16:35:16.962Z] Total : 5925.35 740.67 0.00 0.00 2697.61 651.46 6335.15 00:27:23.639 { 00:27:23.639 "results": [ 00:27:23.639 { 00:27:23.639 "job": "nvme0n1", 00:27:23.639 "core_mask": "0x2", 00:27:23.639 "workload": "randread", 00:27:23.639 "status": "finished", 00:27:23.639 "queue_depth": 16, 00:27:23.639 "io_size": 131072, 00:27:23.639 "runtime": 2.002919, 00:27:23.639 "iops": 5925.351948830682, 00:27:23.639 "mibps": 740.6689936038352, 00:27:23.639 "io_failed": 0, 00:27:23.639 "io_timeout": 0, 00:27:23.639 "avg_latency_us": 2697.60915210169, 00:27:23.639 "min_latency_us": 651.4590476190476, 00:27:23.639 "max_latency_us": 6335.1466666666665 00:27:23.639 } 00:27:23.639 ], 00:27:23.639 "core_count": 1 00:27:23.639 } 00:27:23.898 18:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:23.898 18:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:23.898 18:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:23.898 18:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:23.898 | select(.opcode=="crc32c") 00:27:23.898 | "\(.module_name) \(.executed)"' 00:27:23.898 18:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:23.898 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:23.898 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:23.898 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:23.898 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:23.898 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 568104 00:27:23.898 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 568104 ']' 00:27:23.898 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 568104 00:27:23.898 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:23.898 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:23.898 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 568104 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 568104' 00:27:24.158 killing process with pid 568104 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 568104 00:27:24.158 Received shutdown signal, test time was about 2.000000 seconds 00:27:24.158 00:27:24.158 Latency(us) 00:27:24.158 [2024-10-08T16:35:17.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.158 [2024-10-08T16:35:17.481Z] =================================================================================================================== 00:27:24.158 [2024-10-08T16:35:17.481Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 568104 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=568803 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 568803 /var/tmp/bperf.sock 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 568803 ']' 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:24.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:24.158 18:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:24.158 [2024-10-08 18:35:17.475361] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:27:24.158 [2024-10-08 18:35:17.475415] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid568803 ] 00:27:24.417 [2024-10-08 18:35:17.543419] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.417 [2024-10-08 18:35:17.611064] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.353 18:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:25.353 18:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:25.353 18:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:25.353 18:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:25.353 18:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:25.353 18:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:25.353 18:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:25.612 nvme0n1 00:27:25.612 18:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:25.612 18:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:25.612 Running I/O for 2 seconds... 
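After every run the script verifies not just that I/O completed but that the digests were really computed, and by the expected accel module: it pulls accel_get_stats from the bperf socket, filters for the crc32c opcode, and requires a non-zero executed count from the software module (exp_module=software whenever scan_dsa=false). A condensed sketch of that check, reusing the jq filter visible in the trace:

  read -r acc_module acc_executed < <(
      ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[]
              | select(.opcode=="crc32c")
              | "\(.module_name) \(.executed)"')

  # Pass only if crc32c actually ran, and in the expected module.
  (( acc_executed > 0 )) && [[ "$acc_module" == "software" ]]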
00:27:27.924 27165.00 IOPS, 106.11 MiB/s [2024-10-08T16:35:21.247Z] 27050.50 IOPS, 105.67 MiB/s 00:27:27.924 Latency(us) 00:27:27.924 [2024-10-08T16:35:21.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.924 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:27.924 nvme0n1 : 2.00 27054.22 105.68 0.00 0.00 4723.72 3479.65 12295.80 00:27:27.924 [2024-10-08T16:35:21.247Z] =================================================================================================================== 00:27:27.924 [2024-10-08T16:35:21.247Z] Total : 27054.22 105.68 0.00 0.00 4723.72 3479.65 12295.80 00:27:27.924 { 00:27:27.924 "results": [ 00:27:27.925 { 00:27:27.925 "job": "nvme0n1", 00:27:27.925 "core_mask": "0x2", 00:27:27.925 "workload": "randwrite", 00:27:27.925 "status": "finished", 00:27:27.925 "queue_depth": 128, 00:27:27.925 "io_size": 4096, 00:27:27.925 "runtime": 2.004456, 00:27:27.925 "iops": 27054.223190731052, 00:27:27.925 "mibps": 105.68055933879317, 00:27:27.925 "io_failed": 0, 00:27:27.925 "io_timeout": 0, 00:27:27.925 "avg_latency_us": 4723.7170381337, 00:27:27.925 "min_latency_us": 3479.649523809524, 00:27:27.925 "max_latency_us": 12295.801904761905 00:27:27.925 } 00:27:27.925 ], 00:27:27.925 "core_count": 1 00:27:27.925 } 00:27:27.925 18:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:27.925 18:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:27.925 18:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:27.925 18:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:27.925 | select(.opcode=="crc32c") 00:27:27.925 | "\(.module_name) \(.executed)"' 00:27:27.925 18:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:27.925 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:27.925 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:27.925 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:27.925 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:27.925 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 568803 00:27:27.925 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 568803 ']' 00:27:27.925 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 568803 00:27:27.925 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:27.925 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:27.925 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 568803 00:27:27.925 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:27.925 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:27:27.925 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 568803' 00:27:27.925 killing process with pid 568803 00:27:27.925 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 568803 00:27:27.925 Received shutdown signal, test time was about 2.000000 seconds 00:27:27.925 00:27:27.925 Latency(us) 00:27:27.925 [2024-10-08T16:35:21.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.925 [2024-10-08T16:35:21.248Z] =================================================================================================================== 00:27:27.925 [2024-10-08T16:35:21.248Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:27.925 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 568803 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=569499 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 569499 /var/tmp/bperf.sock 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 569499 ']' 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:28.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:28.184 18:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:28.184 [2024-10-08 18:35:21.423303] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:27:28.184 [2024-10-08 18:35:21.423357] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569499 ] 00:27:28.184 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:28.184 Zero copy mechanism will not be used. 00:27:28.184 [2024-10-08 18:35:21.490779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.443 [2024-10-08 18:35:21.570020] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.010 18:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:29.010 18:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:29.010 18:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:29.010 18:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:29.010 18:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:29.269 18:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:29.269 18:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:29.835 nvme0n1 00:27:29.836 18:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:29.836 18:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:29.836 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:29.836 Zero copy mechanism will not be used. 00:27:29.836 Running I/O for 2 seconds... 
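The human-readable Latency tables and the JSON blocks report the same data; MiB/s is simply iops × io_size ÷ 2^20 (for the run below, 6143.07 IOPS × 131072 B ≈ 767.88 MiB/s). When post-processing a captured log, the JSON is the easier source; a one-liner such as the following pulls out the headline numbers (results.json is a hypothetical file holding one extracted block):

  # Print job name, IOPS, and MiB/s for each job in a perform_tests result.
  jq -r '.results[] | "\(.job) \(.iops) \(.mibps)"' results.json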
00:27:32.145 5825.00 IOPS, 728.12 MiB/s [2024-10-08T16:35:25.468Z] 6145.00 IOPS, 768.12 MiB/s 00:27:32.145 Latency(us) 00:27:32.145 [2024-10-08T16:35:25.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.145 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:32.145 nvme0n1 : 2.00 6143.07 767.88 0.00 0.00 2600.48 1771.03 7365.00 00:27:32.145 [2024-10-08T16:35:25.468Z] =================================================================================================================== 00:27:32.145 [2024-10-08T16:35:25.468Z] Total : 6143.07 767.88 0.00 0.00 2600.48 1771.03 7365.00 00:27:32.145 { 00:27:32.145 "results": [ 00:27:32.145 { 00:27:32.145 "job": "nvme0n1", 00:27:32.145 "core_mask": "0x2", 00:27:32.145 "workload": "randwrite", 00:27:32.145 "status": "finished", 00:27:32.145 "queue_depth": 16, 00:27:32.145 "io_size": 131072, 00:27:32.145 "runtime": 2.003071, 00:27:32.145 "iops": 6143.0673201299405, 00:27:32.145 "mibps": 767.8834150162426, 00:27:32.145 "io_failed": 0, 00:27:32.145 "io_timeout": 0, 00:27:32.145 "avg_latency_us": 2600.4794743135776, 00:27:32.145 "min_latency_us": 1771.032380952381, 00:27:32.145 "max_latency_us": 7364.998095238096 00:27:32.145 } 00:27:32.145 ], 00:27:32.145 "core_count": 1 00:27:32.145 } 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:32.145 | select(.opcode=="crc32c") 00:27:32.145 | "\(.module_name) \(.executed)"' 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 569499 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 569499 ']' 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 569499 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 569499 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 569499' 00:27:32.145 killing process with pid 569499 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 569499 00:27:32.145 Received shutdown signal, test time was about 2.000000 seconds 00:27:32.145 00:27:32.145 Latency(us) 00:27:32.145 [2024-10-08T16:35:25.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.145 [2024-10-08T16:35:25.468Z] =================================================================================================================== 00:27:32.145 [2024-10-08T16:35:25.468Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:32.145 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 569499 00:27:32.404 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 567376 00:27:32.404 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 567376 ']' 00:27:32.404 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 567376 00:27:32.404 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:32.404 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:32.404 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 567376 00:27:32.404 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:32.404 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:32.404 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 567376' 00:27:32.404 killing process with pid 567376 00:27:32.404 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 567376 00:27:32.404 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 567376 00:27:32.404 00:27:32.404 real 0m17.409s 00:27:32.404 user 0m33.413s 00:27:32.404 sys 0m4.849s 00:27:32.404 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:32.404 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:32.404 ************************************ 00:27:32.404 END TEST nvmf_digest_clean 00:27:32.404 ************************************ 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:32.663 ************************************ 00:27:32.663 START TEST nvmf_digest_error 00:27:32.663 ************************************ 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=570224 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 570224 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 570224 ']' 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:32.663 18:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:32.663 [2024-10-08 18:35:25.843168] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:27:32.663 [2024-10-08 18:35:25.843210] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.663 [2024-10-08 18:35:25.915771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.921 [2024-10-08 18:35:25.993148] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.922 [2024-10-08 18:35:25.993185] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.922 [2024-10-08 18:35:25.993192] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.922 [2024-10-08 18:35:25.993199] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.922 [2024-10-08 18:35:25.993204] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
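nvmf_digest_error begins the same way on the target side: nvmf_tgt is started inside the test's network namespace with --wait-for-rpc, so crc32c can be rerouted before any transport or subsystem exists. Stripped of the Jenkins workspace prefix, the command from the trace reduces to the following (the netns name cvl_0_0_ns_spdk is specific to this rig):

  # Start the target idle; -e 0xFFFF enables all tracepoint groups and
  # -i 0 is the trace shm instance id referenced by 'spdk_trace -s nvmf -i 0'.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &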
00:27:32.922 [2024-10-08 18:35:25.993726] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.489 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:33.489 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:33.489 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:33.489 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:33.489 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:33.489 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.489 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:33.489 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.489 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:33.489 [2024-10-08 18:35:26.707811] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:33.489 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.489 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:33.489 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:33.489 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.489 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:33.489 null0 00:27:33.490 [2024-10-08 18:35:26.797723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.748 [2024-10-08 18:35:26.821913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.748 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.748 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:33.748 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:33.748 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:33.748 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:33.748 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:33.748 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=570469 00:27:33.748 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 570469 /var/tmp/bperf.sock 00:27:33.748 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:33.748 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 570469 ']' 
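What makes this the error-path test is the accel error-injection module: the target reassigns crc32c to it before framework init ("Operation crc32c will be assigned to module error" above), injection stays disabled while the host attaches, and is then flipped to corrupt mode for 256 operations. Each corrupted digest surfaces on the host as the nvme_tcp "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" pairs that fill the rest of this run; because the host set --bdev-retry-count -1 and the completions are retryable (dnr:0), the I/O is retried and the workload still finishes. The RPCs as they appear in the trace (rpc_cmd in the harness wraps rpc.py with the target's namespace and default socket):

  # Route crc32c through the error-injection module (before framework_start_init).
  ./scripts/rpc.py accel_assign_opc -o crc32c -m error

  # Keep injection off while the host controller attaches...
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # ...then corrupt the next 256 crc32c operations.
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256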
00:27:33.748 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:33.748 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:33.748 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:33.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:33.748 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:33.748 18:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:33.748 [2024-10-08 18:35:26.873005] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:27:33.748 [2024-10-08 18:35:26.873051] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid570469 ] 00:27:33.748 [2024-10-08 18:35:26.939155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.748 [2024-10-08 18:35:27.016446] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.685 18:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:34.685 18:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:34.685 18:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:34.685 18:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:34.685 18:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:34.685 18:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.685 18:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.685 18:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.685 18:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:34.685 18:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:34.946 nvme0n1 00:27:34.946 18:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:34.946 18:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.946 18:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.946 
18:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.946 18:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:34.946 18:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:35.205 Running I/O for 2 seconds... 00:27:35.205 [2024-10-08 18:35:28.285100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.285135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.285146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.296806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.296831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.296841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.308881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.308903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.308913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.317254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.317274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.317283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.330254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.330281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.330289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.338658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.338678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.338691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.348584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.348605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.348612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.359468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.359489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.359498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.370971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.370992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.371000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.380258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.380279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.380287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.393017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.393037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.393045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.406062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.406082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.406090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.417472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.417492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.417500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.425704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.425724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.425732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.437863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.437882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.437890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.450676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.450696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.450704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.463173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.463193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.463201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.474331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.474351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.474358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.483355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.483380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.483389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.496343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.496364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.496373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.505221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.505241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.505249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.514018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.514038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.514046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.205 [2024-10-08 18:35:28.523899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.205 [2024-10-08 18:35:28.523919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.205 [2024-10-08 18:35:28.523931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.464 [2024-10-08 18:35:28.533487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.464 [2024-10-08 18:35:28.533508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.464 [2024-10-08 18:35:28.533516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.464 [2024-10-08 18:35:28.543980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.464 [2024-10-08 18:35:28.544001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.464 [2024-10-08 18:35:28.544009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.464 [2024-10-08 18:35:28.552799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.464 [2024-10-08 18:35:28.552820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.464 [2024-10-08 18:35:28.552829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.464 [2024-10-08 18:35:28.564823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.464 [2024-10-08 18:35:28.564844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:35.464 [2024-10-08 18:35:28.564852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.464 [2024-10-08 18:35:28.575193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.464 [2024-10-08 18:35:28.575214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.464 [2024-10-08 18:35:28.575222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.464 [2024-10-08 18:35:28.584738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.464 [2024-10-08 18:35:28.584759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.464 [2024-10-08 18:35:28.584767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.464 [2024-10-08 18:35:28.594415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.464 [2024-10-08 18:35:28.594435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.464 [2024-10-08 18:35:28.594443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.464 [2024-10-08 18:35:28.603175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.464 [2024-10-08 18:35:28.603194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.464 [2024-10-08 18:35:28.603202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.464 [2024-10-08 18:35:28.612736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.464 [2024-10-08 18:35:28.612759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.464 [2024-10-08 18:35:28.612768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.464 [2024-10-08 18:35:28.620996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.464 [2024-10-08 18:35:28.621016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.464 [2024-10-08 18:35:28.621024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.464 [2024-10-08 18:35:28.630729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.464 [2024-10-08 18:35:28.630749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:19827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.630757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.465 [2024-10-08 18:35:28.641082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.465 [2024-10-08 18:35:28.641103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.641111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.465 [2024-10-08 18:35:28.649283] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.465 [2024-10-08 18:35:28.649303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.649311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.465 [2024-10-08 18:35:28.660393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.465 [2024-10-08 18:35:28.660414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.660422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.465 [2024-10-08 18:35:28.671098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.465 [2024-10-08 18:35:28.671119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.671127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.465 [2024-10-08 18:35:28.683674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.465 [2024-10-08 18:35:28.683694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.683702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.465 [2024-10-08 18:35:28.691721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.465 [2024-10-08 18:35:28.691742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.691750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.465 [2024-10-08 18:35:28.701101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.465 [2024-10-08 18:35:28.701122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.701129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.465 [2024-10-08 18:35:28.711921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.465 [2024-10-08 18:35:28.711941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.711949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.465 [2024-10-08 18:35:28.721282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.465 [2024-10-08 18:35:28.721301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.721309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.465 [2024-10-08 18:35:28.733258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.465 [2024-10-08 18:35:28.733278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.733286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.465 [2024-10-08 18:35:28.741692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.465 [2024-10-08 18:35:28.741711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.741718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.465 [2024-10-08 18:35:28.753775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.465 [2024-10-08 18:35:28.753795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.753803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.465 [2024-10-08 18:35:28.764523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.465 [2024-10-08 18:35:28.764543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.764551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.465 [2024-10-08 18:35:28.772392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14bb330) 00:27:35.465 [2024-10-08 18:35:28.772411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.772419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.465 [2024-10-08 18:35:28.783265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.465 [2024-10-08 18:35:28.783285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.465 [2024-10-08 18:35:28.783298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.724 [2024-10-08 18:35:28.792517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.724 [2024-10-08 18:35:28.792537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.724 [2024-10-08 18:35:28.792546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.724 [2024-10-08 18:35:28.802545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.724 [2024-10-08 18:35:28.802567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.724 [2024-10-08 18:35:28.802575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.724 [2024-10-08 18:35:28.812885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.724 [2024-10-08 18:35:28.812905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.724 [2024-10-08 18:35:28.812914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.724 [2024-10-08 18:35:28.821182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.724 [2024-10-08 18:35:28.821202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.724 [2024-10-08 18:35:28.821210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.724 [2024-10-08 18:35:28.832235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.724 [2024-10-08 18:35:28.832254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.724 [2024-10-08 18:35:28.832262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.724 [2024-10-08 18:35:28.841686] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.724 [2024-10-08 18:35:28.841706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.724 [2024-10-08 18:35:28.841713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.724 [2024-10-08 18:35:28.850789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.724 [2024-10-08 18:35:28.850808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.724 [2024-10-08 18:35:28.850816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.724 [2024-10-08 18:35:28.859524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.724 [2024-10-08 18:35:28.859543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.724 [2024-10-08 18:35:28.859551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.724 [2024-10-08 18:35:28.868584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.724 [2024-10-08 18:35:28.868603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.724 [2024-10-08 18:35:28.868610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.724 [2024-10-08 18:35:28.878090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.724 [2024-10-08 18:35:28.878110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.724 [2024-10-08 18:35:28.878118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.724 [2024-10-08 18:35:28.888539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.725 [2024-10-08 18:35:28.888559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.725 [2024-10-08 18:35:28.888567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.725 [2024-10-08 18:35:28.897482] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.725 [2024-10-08 18:35:28.897502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.725 [2024-10-08 18:35:28.897510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:35.725 [2024-10-08 18:35:28.907274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.725 [2024-10-08 18:35:28.907294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.725 [2024-10-08 18:35:28.907301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.725 [2024-10-08 18:35:28.919225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.725 [2024-10-08 18:35:28.919244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.725 [2024-10-08 18:35:28.919252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.725 [2024-10-08 18:35:28.927960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.725 [2024-10-08 18:35:28.927983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.725 [2024-10-08 18:35:28.927991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.725 [2024-10-08 18:35:28.937916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.725 [2024-10-08 18:35:28.937937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.725 [2024-10-08 18:35:28.937945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.725 [2024-10-08 18:35:28.950510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.725 [2024-10-08 18:35:28.950531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.725 [2024-10-08 18:35:28.950542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.725 [2024-10-08 18:35:28.959163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.725 [2024-10-08 18:35:28.959183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.725 [2024-10-08 18:35:28.959192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.725 [2024-10-08 18:35:28.969812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.725 [2024-10-08 18:35:28.969832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.725 [2024-10-08 18:35:28.969841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.725 [2024-10-08 18:35:28.980113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.725 [2024-10-08 18:35:28.980132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.725 [2024-10-08 18:35:28.980140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.725 [2024-10-08 18:35:28.991674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.725 [2024-10-08 18:35:28.991694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.725 [2024-10-08 18:35:28.991702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.725 [2024-10-08 18:35:29.003726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.725 [2024-10-08 18:35:29.003747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.725 [2024-10-08 18:35:29.003755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.725 [2024-10-08 18:35:29.014499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.725 [2024-10-08 18:35:29.014520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.725 [2024-10-08 18:35:29.014528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.725 [2024-10-08 18:35:29.028163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.725 [2024-10-08 18:35:29.028184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.725 [2024-10-08 18:35:29.028191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.725 [2024-10-08 18:35:29.037864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.725 [2024-10-08 18:35:29.037882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.725 [2024-10-08 18:35:29.037890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.047116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.047141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.047149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.056589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.056609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.056617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.067253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.067274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.067283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.076858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.076878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.076886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.086243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.086264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.086272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.098114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.098134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.098143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.108451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.108472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.108480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.118786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.118806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:35.985 [2024-10-08 18:35:29.118814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.131282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.131302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.131309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.144869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.144889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.144898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.157248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.157268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.157277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.165556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.165576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.165583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.176476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.176496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.176503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.188939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.188959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.188967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.200206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.200226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:11129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.200234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.210289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.210309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.210317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.220792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.220811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.220819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.985 [2024-10-08 18:35:29.229764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.985 [2024-10-08 18:35:29.229784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.985 [2024-10-08 18:35:29.229796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.986 [2024-10-08 18:35:29.242207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.986 [2024-10-08 18:35:29.242228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.986 [2024-10-08 18:35:29.242236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.986 [2024-10-08 18:35:29.253327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.986 [2024-10-08 18:35:29.253347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.986 [2024-10-08 18:35:29.253355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.986 [2024-10-08 18:35:29.261361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.986 [2024-10-08 18:35:29.261388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.986 [2024-10-08 18:35:29.261396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.986 [2024-10-08 18:35:29.272506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.986 [2024-10-08 18:35:29.272526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.986 [2024-10-08 18:35:29.272534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.986 24475.00 IOPS, 95.61 MiB/s [2024-10-08T16:35:29.309Z] [2024-10-08 18:35:29.283431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.986 [2024-10-08 18:35:29.283452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.986 [2024-10-08 18:35:29.283460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.986 [2024-10-08 18:35:29.292224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.986 [2024-10-08 18:35:29.292244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.986 [2024-10-08 18:35:29.292252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.986 [2024-10-08 18:35:29.303794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:35.986 [2024-10-08 18:35:29.303814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.986 [2024-10-08 18:35:29.303822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.245 [2024-10-08 18:35:29.314502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.245 [2024-10-08 18:35:29.314524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.245 [2024-10-08 18:35:29.314533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.245 [2024-10-08 18:35:29.323931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.245 [2024-10-08 18:35:29.323954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.245 [2024-10-08 18:35:29.323962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.245 [2024-10-08 18:35:29.333883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.245 [2024-10-08 18:35:29.333904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.245 [2024-10-08 18:35:29.333912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.245 [2024-10-08 18:35:29.344046] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.245 [2024-10-08 18:35:29.344072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.245 [2024-10-08 18:35:29.344081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.245 [2024-10-08 18:35:29.355864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.245 [2024-10-08 18:35:29.355885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.245 [2024-10-08 18:35:29.355893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.245 [2024-10-08 18:35:29.367508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.245 [2024-10-08 18:35:29.367528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.245 [2024-10-08 18:35:29.367535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.245 [2024-10-08 18:35:29.376887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.245 [2024-10-08 18:35:29.376907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.245 [2024-10-08 18:35:29.376915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.245 [2024-10-08 18:35:29.387115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.245 [2024-10-08 18:35:29.387136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.245 [2024-10-08 18:35:29.387144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.245 [2024-10-08 18:35:29.399545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.245 [2024-10-08 18:35:29.399567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.245 [2024-10-08 18:35:29.399575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.245 [2024-10-08 18:35:29.410936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.245 [2024-10-08 18:35:29.410956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.245 [2024-10-08 18:35:29.410968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:36.245 [2024-10-08 18:35:29.419130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.245 [2024-10-08 18:35:29.419150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.245 [2024-10-08 18:35:29.419158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.245 [2024-10-08 18:35:29.430831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.245 [2024-10-08 18:35:29.430851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.245 [2024-10-08 18:35:29.430859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.245 [2024-10-08 18:35:29.440913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.246 [2024-10-08 18:35:29.440933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.246 [2024-10-08 18:35:29.440940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.246 [2024-10-08 18:35:29.448994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.246 [2024-10-08 18:35:29.449014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.246 [2024-10-08 18:35:29.449022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.246 [2024-10-08 18:35:29.459109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.246 [2024-10-08 18:35:29.459129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.246 [2024-10-08 18:35:29.459137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.246 [2024-10-08 18:35:29.471318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.246 [2024-10-08 18:35:29.471339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.246 [2024-10-08 18:35:29.471347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.246 [2024-10-08 18:35:29.481085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.246 [2024-10-08 18:35:29.481106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.246 [2024-10-08 18:35:29.481114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.246 [2024-10-08 18:35:29.492282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.246 [2024-10-08 18:35:29.492302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.246 [2024-10-08 18:35:29.492310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.246 [2024-10-08 18:35:29.500766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.246 [2024-10-08 18:35:29.500788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.246 [2024-10-08 18:35:29.500797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.246 [2024-10-08 18:35:29.510680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.246 [2024-10-08 18:35:29.510701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.246 [2024-10-08 18:35:29.510709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.246 [2024-10-08 18:35:29.521697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.246 [2024-10-08 18:35:29.521717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.246 [2024-10-08 18:35:29.521725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.246 [2024-10-08 18:35:29.530480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.246 [2024-10-08 18:35:29.530500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.246 [2024-10-08 18:35:29.530509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.246 [2024-10-08 18:35:29.541356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.246 [2024-10-08 18:35:29.541383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.246 [2024-10-08 18:35:29.541392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.246 [2024-10-08 18:35:29.551048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.246 [2024-10-08 18:35:29.551068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.246 [2024-10-08 18:35:29.551075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.246 [2024-10-08 18:35:29.561079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.246 [2024-10-08 18:35:29.561102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.246 [2024-10-08 18:35:29.561110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.506 [2024-10-08 18:35:29.570225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.506 [2024-10-08 18:35:29.570246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.506 [2024-10-08 18:35:29.570255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.506 [2024-10-08 18:35:29.580951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.506 [2024-10-08 18:35:29.580974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.506 [2024-10-08 18:35:29.580984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.506 [2024-10-08 18:35:29.588823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.506 [2024-10-08 18:35:29.588848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.506 [2024-10-08 18:35:29.588856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.506 [2024-10-08 18:35:29.599436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.506 [2024-10-08 18:35:29.599456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.506 [2024-10-08 18:35:29.599464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.506 [2024-10-08 18:35:29.610597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.506 [2024-10-08 18:35:29.610618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.506 [2024-10-08 18:35:29.610626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.506 [2024-10-08 18:35:29.620149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.506 [2024-10-08 18:35:29.620169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:36.506 [2024-10-08 18:35:29.620177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.506 [2024-10-08 18:35:29.630984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.506 [2024-10-08 18:35:29.631005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.506 [2024-10-08 18:35:29.631013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.506 [2024-10-08 18:35:29.640070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.506 [2024-10-08 18:35:29.640092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.506 [2024-10-08 18:35:29.640100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.506 [2024-10-08 18:35:29.651879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.506 [2024-10-08 18:35:29.651900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.506 [2024-10-08 18:35:29.651908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.506 [2024-10-08 18:35:29.661792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.506 [2024-10-08 18:35:29.661814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.506 [2024-10-08 18:35:29.661823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.506 [2024-10-08 18:35:29.673847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.506 [2024-10-08 18:35:29.673867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.506 [2024-10-08 18:35:29.673878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.506 [2024-10-08 18:35:29.686193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.506 [2024-10-08 18:35:29.686213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.507 [2024-10-08 18:35:29.686220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.507 [2024-10-08 18:35:29.698678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.507 [2024-10-08 18:35:29.698699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:1167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.507 [2024-10-08 18:35:29.698707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.507 [2024-10-08 18:35:29.709679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.507 [2024-10-08 18:35:29.709699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.507 [2024-10-08 18:35:29.709707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.507 [2024-10-08 18:35:29.722099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.507 [2024-10-08 18:35:29.722119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.507 [2024-10-08 18:35:29.722127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.507 [2024-10-08 18:35:29.733919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.507 [2024-10-08 18:35:29.733938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.507 [2024-10-08 18:35:29.733946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.507 [2024-10-08 18:35:29.746435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.507 [2024-10-08 18:35:29.746454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.507 [2024-10-08 18:35:29.746462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.507 [2024-10-08 18:35:29.757857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.507 [2024-10-08 18:35:29.757877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.507 [2024-10-08 18:35:29.757885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.507 [2024-10-08 18:35:29.766645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.507 [2024-10-08 18:35:29.766665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.507 [2024-10-08 18:35:29.766673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.507 [2024-10-08 18:35:29.778658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.507 [2024-10-08 18:35:29.778678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.507 [2024-10-08 18:35:29.778687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.507 [2024-10-08 18:35:29.790981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.507 [2024-10-08 18:35:29.791002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.507 [2024-10-08 18:35:29.791010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.507 [2024-10-08 18:35:29.800432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.507 [2024-10-08 18:35:29.800451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.507 [2024-10-08 18:35:29.800459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.507 [2024-10-08 18:35:29.810172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.507 [2024-10-08 18:35:29.810191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.507 [2024-10-08 18:35:29.810199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.507 [2024-10-08 18:35:29.819195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.507 [2024-10-08 18:35:29.819214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.507 [2024-10-08 18:35:29.819222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.766 [2024-10-08 18:35:29.830061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.766 [2024-10-08 18:35:29.830081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.766 [2024-10-08 18:35:29.830090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.766 [2024-10-08 18:35:29.838259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.766 [2024-10-08 18:35:29.838280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.766 [2024-10-08 18:35:29.838288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.766 [2024-10-08 18:35:29.847397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 
00:27:36.766 [2024-10-08 18:35:29.847418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.766 [2024-10-08 18:35:29.847426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.766 [2024-10-08 18:35:29.858625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.766 [2024-10-08 18:35:29.858646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.766 [2024-10-08 18:35:29.858657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.766 [2024-10-08 18:35:29.870737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.766 [2024-10-08 18:35:29.870758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.766 [2024-10-08 18:35:29.870766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.766 [2024-10-08 18:35:29.881636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.766 [2024-10-08 18:35:29.881656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.766 [2024-10-08 18:35:29.881664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.766 [2024-10-08 18:35:29.890362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.766 [2024-10-08 18:35:29.890390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.766 [2024-10-08 18:35:29.890399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.766 [2024-10-08 18:35:29.902931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.766 [2024-10-08 18:35:29.902951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.766 [2024-10-08 18:35:29.902959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.766 [2024-10-08 18:35:29.914702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.766 [2024-10-08 18:35:29.914721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.766 [2024-10-08 18:35:29.914729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.766 [2024-10-08 18:35:29.922970] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.766 [2024-10-08 18:35:29.922989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.766 [2024-10-08 18:35:29.922997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.766 [2024-10-08 18:35:29.935534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.766 [2024-10-08 18:35:29.935554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.766 [2024-10-08 18:35:29.935562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.766 [2024-10-08 18:35:29.947691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.766 [2024-10-08 18:35:29.947710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.766 [2024-10-08 18:35:29.947718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.766 [2024-10-08 18:35:29.959119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.766 [2024-10-08 18:35:29.959143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.766 [2024-10-08 18:35:29.959151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.766 [2024-10-08 18:35:29.966659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.767 [2024-10-08 18:35:29.966680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.767 [2024-10-08 18:35:29.966688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.767 [2024-10-08 18:35:29.977346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.767 [2024-10-08 18:35:29.977367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.767 [2024-10-08 18:35:29.977386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.767 [2024-10-08 18:35:29.988822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.767 [2024-10-08 18:35:29.988841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.767 [2024-10-08 18:35:29.988849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:36.767 [2024-10-08 18:35:29.997032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.767 [2024-10-08 18:35:29.997051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.767 [2024-10-08 18:35:29.997059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.767 [2024-10-08 18:35:30.009767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.767 [2024-10-08 18:35:30.009788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.767 [2024-10-08 18:35:30.009796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.767 [2024-10-08 18:35:30.023543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.767 [2024-10-08 18:35:30.023565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.767 [2024-10-08 18:35:30.023574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.767 [2024-10-08 18:35:30.035240] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.767 [2024-10-08 18:35:30.035261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.767 [2024-10-08 18:35:30.035269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.767 [2024-10-08 18:35:30.043592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.767 [2024-10-08 18:35:30.043621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.767 [2024-10-08 18:35:30.043629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.767 [2024-10-08 18:35:30.054873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.767 [2024-10-08 18:35:30.054902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.767 [2024-10-08 18:35:30.054915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.767 [2024-10-08 18:35:30.063400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.767 [2024-10-08 18:35:30.063423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.767 [2024-10-08 18:35:30.063433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.767 [2024-10-08 18:35:30.073585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.767 [2024-10-08 18:35:30.073607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.767 [2024-10-08 18:35:30.073616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.767 [2024-10-08 18:35:30.085811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:36.767 [2024-10-08 18:35:30.085833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.767 [2024-10-08 18:35:30.085842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.026 [2024-10-08 18:35:30.099087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:37.026 [2024-10-08 18:35:30.099114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.026 [2024-10-08 18:35:30.099124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.026 [2024-10-08 18:35:30.111008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:37.026 [2024-10-08 18:35:30.111031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.026 [2024-10-08 18:35:30.111040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.026 [2024-10-08 18:35:30.121104] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:37.026 [2024-10-08 18:35:30.121125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.026 [2024-10-08 18:35:30.121133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.026 [2024-10-08 18:35:30.129668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:37.026 [2024-10-08 18:35:30.129688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.026 [2024-10-08 18:35:30.129696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.026 [2024-10-08 18:35:30.141933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:37.026 [2024-10-08 18:35:30.141953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.026 [2024-10-08 18:35:30.141967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.026 [2024-10-08 18:35:30.152454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:37.026 [2024-10-08 18:35:30.152476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.026 [2024-10-08 18:35:30.152484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.026 [2024-10-08 18:35:30.162697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:37.026 [2024-10-08 18:35:30.162716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.026 [2024-10-08 18:35:30.162724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.026 [2024-10-08 18:35:30.170900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:37.026 [2024-10-08 18:35:30.170920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.026 [2024-10-08 18:35:30.170929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.026 [2024-10-08 18:35:30.181593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:37.026 [2024-10-08 18:35:30.181613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.026 [2024-10-08 18:35:30.181621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.026 [2024-10-08 18:35:30.192088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:37.026 [2024-10-08 18:35:30.192108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.026 [2024-10-08 18:35:30.192116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.026 [2024-10-08 18:35:30.200759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:37.026 [2024-10-08 18:35:30.200779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.026 [2024-10-08 18:35:30.200788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.026 [2024-10-08 18:35:30.211647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330) 00:27:37.026 [2024-10-08 18:35:30.211667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:37.026 [2024-10-08 18:35:30.211676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:37.026 [2024-10-08 18:35:30.223554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330)
00:27:37.026 [2024-10-08 18:35:30.223574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.026 [2024-10-08 18:35:30.223583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:37.026 [2024-10-08 18:35:30.234702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330)
00:27:37.026 [2024-10-08 18:35:30.234726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.026 [2024-10-08 18:35:30.234734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:37.026 [2024-10-08 18:35:30.243405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330)
00:27:37.026 [2024-10-08 18:35:30.243426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.026 [2024-10-08 18:35:30.243434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:37.026 [2024-10-08 18:35:30.255524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330)
00:27:37.026 [2024-10-08 18:35:30.255544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.026 [2024-10-08 18:35:30.255552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:37.026 [2024-10-08 18:35:30.264269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330)
00:27:37.026 [2024-10-08 18:35:30.264289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:50 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.026 [2024-10-08 18:35:30.264297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:37.026 24328.00 IOPS, 95.03 MiB/s [2024-10-08T16:35:30.349Z] [2024-10-08 18:35:30.275530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb330)
00:27:37.026 [2024-10-08 18:35:30.275549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.026 [2024-10-08 18:35:30.275557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:37.026
00:27:37.026 Latency(us)
00:27:37.026 [2024-10-08T16:35:30.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:37.026 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:37.026 nvme0n1 : 2.00 24338.70 95.07 0.00 0.00 5252.75 2434.19 19848.05
00:27:37.026 [2024-10-08T16:35:30.349Z] ===================================================================================================================
00:27:37.026 [2024-10-08T16:35:30.349Z] Total : 24338.70 95.07 0.00 0.00 5252.75 2434.19 19848.05
00:27:37.026 {
00:27:37.026 "results": [
00:27:37.026 {
00:27:37.026 "job": "nvme0n1",
00:27:37.026 "core_mask": "0x2",
00:27:37.026 "workload": "randread",
00:27:37.026 "status": "finished",
00:27:37.026 "queue_depth": 128,
00:27:37.026 "io_size": 4096,
00:27:37.026 "runtime": 2.003517,
00:27:37.026 "iops": 24338.70039535477,
00:27:37.026 "mibps": 95.07304841935456,
00:27:37.026 "io_failed": 0,
00:27:37.026 "io_timeout": 0,
00:27:37.026 "avg_latency_us": 5252.751938267012,
00:27:37.026 "min_latency_us": 2434.194285714286,
00:27:37.026 "max_latency_us": 19848.045714285716
00:27:37.026 }
00:27:37.026 ],
00:27:37.026 "core_count": 1
00:27:37.026 }
00:27:37.026 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:37.026 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:37.026 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:37.026 | .driver_specific
00:27:37.026 | .nvme_error
00:27:37.026 | .status_code
00:27:37.026 | .command_transient_transport_error'
00:27:37.026 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:37.286 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 191 > 0 ))
00:27:37.286 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 570469
00:27:37.286 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 570469 ']'
00:27:37.286 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 570469
00:27:37.286 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:27:37.286 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:37.286 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 570469
00:27:37.286 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:37.286 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:37.286 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 570469'
00:27:37.286 killing process with pid 570469
00:27:37.286 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 570469
00:27:37.286 Received shutdown signal, test time was about 2.000000 seconds
00:27:37.286
00:27:37.286 Latency(us)
00:27:37.286 [2024-10-08T16:35:30.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:37.286 [2024-10-08T16:35:30.609Z] ===================================================================================================================
00:27:37.286 [2024-10-08T16:35:30.609Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:37.286 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 570469
00:27:37.545 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:37.545 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:37.545 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:37.545 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:37.545 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:37.545 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=570974
00:27:37.545 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 570974 /var/tmp/bperf.sock
00:27:37.545 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:37.545 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 570974 ']'
00:27:37.545 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:37.545 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:37.545 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:37.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:37.545 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:37.545 18:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:37.545 [2024-10-08 18:35:30.789077] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:27:37.545 [2024-10-08 18:35:30.789126] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid570974 ]
00:27:37.545 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:37.545 Zero copy mechanism will not be used.
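As a reading aid before the next run starts: the get_transient_errcount/bperf_rpc/jq trace above is the heart of the pass/fail check, and it condenses to roughly the following shell sketch (reconstructed from the xtrace, not captured log output; helper names are the ones host/digest.sh uses, and the rpc.py path is shortened):

    get_transient_errcount() {
        # bdev_get_iostat includes per-NVMe-bdev error counters because the
        # controller was created after bdev_nvme_set_options --nvme-error-stat.
        scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }
    # The digest-error case passes only if at least one COMMAND TRANSIENT
    # TRANSPORT ERROR completion was counted; the 4096-byte run above saw 191.
    (( $(get_transient_errcount nvme0n1) > 0 ))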
00:27:37.545 [2024-10-08 18:35:30.859598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:37.803 [2024-10-08 18:35:30.934697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:27:38.371 18:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:38.371 18:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:27:38.371 18:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:38.371 18:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:38.630 18:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:38.630 18:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:38.630 18:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:38.630 18:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:38.630 18:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:38.630 18:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:39.197 nvme0n1
00:27:39.197 18:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:39.197 18:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.197 18:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:39.197 18:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.197 18:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:39.197 18:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:39.197 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:39.197 Zero copy mechanism will not be used.
00:27:39.197 Running I/O for 2 seconds...
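Before the 131072-byte error wall begins, the setup just traced reduces to this sequence (again a sketch, not captured log output; bperf_rpc wraps scripts/rpc.py against /var/tmp/bperf.sock as the digest.sh@18 lines show, rpc_cmd presumably targets the SPDK target's default RPC socket, and -i 32 is presumably the injection interval):

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # record error stats; -1 = unlimited retries
    rpc_cmd accel_error_inject_error -o crc32c -t disable                     # attach with injection switched off
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                        # --ddgst turns on TCP data digest
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32               # now corrupt crc32c results
    bperf_py perform_tests                                                    # run the 131072-byte, qd=16 randread pass

With the accel crc32c output corrupted, each affected READ completes on the host as a data digest error, which is exactly the stream of nvme_tcp.c:1470 errors and TRANSIENT TRANSPORT ERROR completions that follows.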
00:27:39.197 [2024-10-08 18:35:32.354912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.354949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.354960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.361394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.361435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.361446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.368471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.368493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.368505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.376882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.376904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.376912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.384443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.384465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.384474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.392180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.392202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.392210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.399792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.399813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.399822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.407433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.407454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.407463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.413476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.413496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.413504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.418846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.418867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.418875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.424064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.424084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.424092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.429356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.429387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.429396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.434645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.434666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.434674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.440069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.440090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.440097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.445577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.445598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.445606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.451668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.451689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.451697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.457245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.457264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.457273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.462811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.462831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.462839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.468127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.468148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.468155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.473484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.473504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.473512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.478835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.478855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:39.198 [2024-10-08 18:35:32.478863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.484484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.484503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.484511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.489974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.489995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.490003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.495301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.495323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.495331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.500735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.500754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.500762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.506081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.506101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.506109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.511471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.511492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.511500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.198 [2024-10-08 18:35:32.516984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.198 [2024-10-08 18:35:32.517005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.198 [2024-10-08 18:35:32.517014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.458 [2024-10-08 18:35:32.522632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.458 [2024-10-08 18:35:32.522664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.458 [2024-10-08 18:35:32.522676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.458 [2024-10-08 18:35:32.528156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.458 [2024-10-08 18:35:32.528177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.458 [2024-10-08 18:35:32.528185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.458 [2024-10-08 18:35:32.533495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.458 [2024-10-08 18:35:32.533516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.458 [2024-10-08 18:35:32.533524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.458 [2024-10-08 18:35:32.539021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.458 [2024-10-08 18:35:32.539041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.458 [2024-10-08 18:35:32.539049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.458 [2024-10-08 18:35:32.544397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.458 [2024-10-08 18:35:32.544416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.458 [2024-10-08 18:35:32.544424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.458 [2024-10-08 18:35:32.549993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.458 [2024-10-08 18:35:32.550012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.550020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.555360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.555385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.555393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.560756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.560776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.560783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.566216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.566237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.566245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.571785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.571805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.571812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.577350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.577370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.577383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.582840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.582859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.582867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.588255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.588275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.588282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.593584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 
00:27:39.459 [2024-10-08 18:35:32.593604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.593612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.598854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.598874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.598881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.604207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.604227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.604235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.610564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.610584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.610592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.618803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.618825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.618841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.625826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.625847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.625855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.634933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.634954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.634963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.642274] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.642296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.642304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.650260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.650281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.650290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.658629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.658649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.658657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.667091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.667112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.667120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.675599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.675620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.675628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.683223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.683243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.683251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.459 [2024-10-08 18:35:32.691858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:39.459 [2024-10-08 18:35:32.691883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.459 [2024-10-08 18:35:32.691891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0
00:27:39.459 [2024-10-08 18:35:32.699834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600)
00:27:39.459 [2024-10-08 18:35:32.699856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.459 [2024-10-08 18:35:32.699864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error on tqpair=(0x11c6600), the failed READ command print, a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for each outstanding READ on qid:1 from 18:35:32.707 through 18:35:33.114 ...]
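Aside: every failure in this run is the same host-side check. With data digest (DDGST) negotiated on the TCP connection, the host recomputes CRC32C over each received C2HData payload (completed here through nvme_tcp_accel_seq_recv_compute_crc32_done) and compares it against the digest field carried in the PDU; on mismatch the command is completed with the (00/22) status printed above, i.e. Transient Transport Error, which is retryable, consistent with the READ stream continuing below. A minimal sketch of that digest comparison, assuming nothing about SPDK internals (the helper names are hypothetical, not SPDK APIs):

/* Hypothetical illustration, not SPDK source: verify an NVMe/TCP data
 * digest (DDGST), which the transport spec defines as CRC32C of the
 * PDU payload. A mismatch corresponds to the "data digest error"
 * lines in this log. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical helper: returns 0 when the digest carried in the PDU
 * matches the payload, -1 when it does not (a data digest error). */
static int verify_ddgst(const uint8_t *payload, size_t len, uint32_t ddgst)
{
    return crc32c(payload, len) == ddgst ? 0 : -1;
}

int main(void)
{
    uint8_t payload[512];
    memset(payload, 0xAB, sizeof(payload));

    uint32_t ddgst = crc32c(payload, sizeof(payload));
    printf("intact payload:    %d\n", verify_ddgst(payload, sizeof(payload), ddgst));

    payload[100] ^= 0x01; /* flip one bit, as a corrupted PDU would */
    printf("corrupted payload: %d\n", verify_ddgst(payload, sizeof(payload), ddgst));
    return 0;
}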
[... the same pattern of data digest errors and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions on tqpair=(0x11c6600) continues from 18:35:33.119 through 18:35:33.345 ...]
00:27:40.243 5177.00 IOPS, 647.12 MiB/s [2024-10-08T16:35:33.566Z]
[... the pattern continues from 18:35:33.353 through 18:35:33.518 ...]
00:27:40.244 [2024-10-08 18:35:33.523924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600)
00:27:40.244 [2024-10-08 18:35:33.523945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.244 [2024-10-08 18:35:33.523953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.244 [2024-10-08 18:35:33.528969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.244 [2024-10-08 18:35:33.528989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.244 [2024-10-08 18:35:33.528996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.244 [2024-10-08 18:35:33.534184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.244 [2024-10-08 18:35:33.534205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.244 [2024-10-08 18:35:33.534213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.244 [2024-10-08 18:35:33.539586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.244 [2024-10-08 18:35:33.539607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.244 [2024-10-08 18:35:33.539615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.244 [2024-10-08 18:35:33.544777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.244 [2024-10-08 18:35:33.544797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.244 [2024-10-08 18:35:33.544805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.244 [2024-10-08 18:35:33.549978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.244 [2024-10-08 18:35:33.549998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.244 [2024-10-08 18:35:33.550006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.244 [2024-10-08 18:35:33.555255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.244 [2024-10-08 18:35:33.555277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.244 [2024-10-08 18:35:33.555285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.244 [2024-10-08 18:35:33.560571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.244 [2024-10-08 18:35:33.560592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.244 [2024-10-08 18:35:33.560599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.565812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.565835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.565846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.571151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.571173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.571182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.576446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.576469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.576477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.581758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.581780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.581788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.587032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.587054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.587064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.592577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.592598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.592607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.598356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.598386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.598395] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.603608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.603629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.603637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.608846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.608866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.608874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.614085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.614109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.614117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.619360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.619386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.619394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.624629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.624649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.624657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.629844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.629864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.629872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.635139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.635161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:40.505 [2024-10-08 18:35:33.635170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.640602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.640623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.640632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.645826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.645847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.645857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.651038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.651059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.651066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.656305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.656326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.656334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.661566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.661587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.661594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.666847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.666868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.505 [2024-10-08 18:35:33.666876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.505 [2024-10-08 18:35:33.672080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.505 [2024-10-08 18:35:33.672100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6144 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.672107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.677311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.677332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.677339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.682588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.682609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.682617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.687824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.687844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.687852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.693042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.693063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.693071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.698279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.698300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.698308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.703495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.703515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.703526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.708674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.708695] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.708702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.713916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.713937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.713945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.719144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.719165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.719172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.724474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.724495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.724503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.729662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.729683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.729691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.734920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.734941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.734948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.740186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.740206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.740214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.745471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.745492] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.745500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.750704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.750728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.750735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.755920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.755941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.755949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.761153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.761173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.761181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.766354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.766381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.766389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.771500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.771521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.771529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.776742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.776762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.776770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.781901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.781922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.781930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.787067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.787088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.787096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.792327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.792349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.792356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.797538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.797560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.797568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.802797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.802817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.802825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.807976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.807997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.808005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.813138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.813159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.813166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.818367] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.506 [2024-10-08 18:35:33.818394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.506 [2024-10-08 18:35:33.818402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.506 [2024-10-08 18:35:33.823617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.507 [2024-10-08 18:35:33.823638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.507 [2024-10-08 18:35:33.823646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.766 [2024-10-08 18:35:33.828885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.766 [2024-10-08 18:35:33.828906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.766 [2024-10-08 18:35:33.828914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.766 [2024-10-08 18:35:33.834097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.766 [2024-10-08 18:35:33.834118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.766 [2024-10-08 18:35:33.834126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.766 [2024-10-08 18:35:33.839332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.766 [2024-10-08 18:35:33.839353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.766 [2024-10-08 18:35:33.839364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.766 [2024-10-08 18:35:33.844568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.766 [2024-10-08 18:35:33.844589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.766 [2024-10-08 18:35:33.844597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.766 [2024-10-08 18:35:33.849796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.849816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.849823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:40.767 [2024-10-08 18:35:33.855108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.855128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.855136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.860395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.860415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.860423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.865618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.865639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.865646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.870898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.870918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.870926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.876120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.876140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.876148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.881358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.881385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.881394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.886641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.886664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.886673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.891900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.891922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.891930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.897133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.897154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.897162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.902397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.902418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.902425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.907600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.907621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.907629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.912792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.912813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.912821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.917990] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.918011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.918019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.923278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.923300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.923308] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.928536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.928557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.928568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.932028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.932048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.932056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.936291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.936312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.936320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.941530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.941551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.941558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.946664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.946685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.946693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.951735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.951756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.951764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.956879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.956910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.956918] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.962131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.962151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.962159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.967336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.967359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.967367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.972583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.972608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.972615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.977852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.977873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.977880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.983028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.983048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.983056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.988357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.767 [2024-10-08 18:35:33.988385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.767 [2024-10-08 18:35:33.988394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.767 [2024-10-08 18:35:33.993661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600) 00:27:40.768 [2024-10-08 18:35:33.993681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0
00:27:40.768 [2024-10-08 18:35:33.993690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:40.768 [2024-10-08 18:35:33.999345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600)
00:27:40.768 [2024-10-08 18:35:33.999367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.768 [2024-10-08 18:35:33.999380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... dozens of identical three-line entries elided (18:35:34.005261 through 18:35:34.348111): each repeats the same data digest error on tqpair=(0x11c6600), a READ command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1, differing only in timestamp, cid, lba, and sqhd ...]
00:27:41.289 5436.50 IOPS, 679.56 MiB/s [2024-10-08T16:35:34.612Z]
[2024-10-08 18:35:34.355296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c6600)
00:27:41.289 [2024-10-08 18:35:34.355315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:41.289 [2024-10-08 18:35:34.355323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:41.289
00:27:41.289 Latency(us)
00:27:41.289 [2024-10-08T16:35:34.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:41.289 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:41.289 nvme0n1 : 2.00 5435.28 679.41 0.00 0.00 2940.57 647.56 8862.96
00:27:41.289 [2024-10-08T16:35:34.612Z] ===================================================================================================================
00:27:41.289 [2024-10-08T16:35:34.612Z] Total : 5435.28 679.41 0.00 0.00 2940.57 647.56 8862.96
00:27:41.289 {
00:27:41.289 "results": [
00:27:41.289 {
00:27:41.289 "job": "nvme0n1",
00:27:41.289 "core_mask": "0x2",
00:27:41.289 "workload": "randread",
00:27:41.289 "status": "finished",
00:27:41.289 "queue_depth": 16,
00:27:41.289 "io_size": 131072,
00:27:41.289 "runtime": 2.003393,
00:27:41.289 "iops": 5435.279049093213,
00:27:41.289 "mibps": 679.4098811366516,
00:27:41.289 "io_failed": 0,
00:27:41.289 "io_timeout": 0,
00:27:41.289 "avg_latency_us": 2940.567746743109,
00:27:41.289 "min_latency_us": 647.5580952380952,
00:27:41.289 "max_latency_us": 8862.96380952381
00:27:41.289 }
00:27:41.289 ],
00:27:41.289 "core_count": 1
00:27:41.289 }
00:27:41.289 18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:41.289 | .driver_specific
00:27:41.289 | .nvme_error
00:27:41.289 | .status_code
00:27:41.289 | .command_transient_transport_error'
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 351 > 0 ))
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 570974
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 570974 ']'
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 570974
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 570974
00:27:41.548 18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 570974'
killing process with pid 570974
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 570974
Received shutdown signal, test time was about 2.000000 seconds
00:27:41.548
00:27:41.548 Latency(us)
00:27:41.548 [2024-10-08T16:35:34.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:41.548 [2024-10-08T16:35:34.871Z] ===================================================================================================================
00:27:41.548 [2024-10-08T16:35:34.871Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:41.548 18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 570974
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=571649
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 571649 /var/tmp/bperf.sock
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 571649 ']'
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
18:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-10-08 18:35:34.864590] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:27:41.548 [2024-10-08 18:35:34.864634] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid571649 ]
00:27:41.807 [2024-10-08 18:35:34.932946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:41.807 [2024-10-08 18:35:35.000587] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:27:42.741 18:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:42.742 18:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:27:42.742 18:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:42.742 18:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:42.742 18:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:42.742 18:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:42.742 18:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:42.742 18:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:42.742 18:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:42.742 18:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:43.000 nvme0n1
00:27:43.000 18:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:43.000 18:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:43.000 18:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:43.000 18:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:43.000 18:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:43.000 18:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:43.000 Running I/O for 2 seconds...
00:27:43.000 [2024-10-08 18:35:36.266608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f6458
00:27:43.000 [2024-10-08 18:35:36.267325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:43.000 [2024-10-08 18:35:36.267355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:27:43.000 [2024-10-08 18:35:36.277533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198fef90
00:27:43.000 [2024-10-08 18:35:36.278833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:43.000 [2024-10-08 18:35:36.278856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... dozens of further three-line entries elided (18:35:36.285612 through 18:35:36.779678): each repeats the same Data digest error on tqpair=(0xdff5c0), a WRITE command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1, differing only in timestamp, pdu, cid, lba, and sqhd ...]
00:27:43.521 [2024-10-08 18:35:36.778076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e5220
00:27:43.521 [2024-10-08 18:35:36.779661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:43.521 [2024-10-08 18:35:36.779678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:27:43.521 [2024-10-08 18:35:36.786478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f7da8
00:27:43.521 [2024-10-08 18:35:36.787300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:43.522 [2024-10-08
18:35:36.787320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:43.522 [2024-10-08 18:35:36.795500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e9e10 00:27:43.522 [2024-10-08 18:35:36.796331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.522 [2024-10-08 18:35:36.796351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:43.522 [2024-10-08 18:35:36.806541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e95a0 00:27:43.522 [2024-10-08 18:35:36.808106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.522 [2024-10-08 18:35:36.808126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:43.522 [2024-10-08 18:35:36.816389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198eff18 00:27:43.522 [2024-10-08 18:35:36.818077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.522 [2024-10-08 18:35:36.818094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.522 [2024-10-08 18:35:36.823095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e38d0 00:27:43.522 [2024-10-08 18:35:36.823859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.522 [2024-10-08 18:35:36.823877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:43.522 [2024-10-08 18:35:36.831912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198df550 00:27:43.522 [2024-10-08 18:35:36.832699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.522 [2024-10-08 18:35:36.832718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:43.522 [2024-10-08 18:35:36.841849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ee5c8 00:27:43.781 [2024-10-08 18:35:36.842781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.781 [2024-10-08 18:35:36.842801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.781 [2024-10-08 18:35:36.851754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f46d0 00:27:43.781 [2024-10-08 18:35:36.852771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:43.781 [2024-10-08 18:35:36.852806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:43.781 [2024-10-08 18:35:36.861707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e9168 00:27:43.781 [2024-10-08 18:35:36.862905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.781 [2024-10-08 18:35:36.862924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.781 [2024-10-08 18:35:36.871504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f0788 00:27:43.781 [2024-10-08 18:35:36.872767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.781 [2024-10-08 18:35:36.872786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:43.781 [2024-10-08 18:35:36.880249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e5658 00:27:43.781 [2024-10-08 18:35:36.881132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.781 [2024-10-08 18:35:36.881150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.781 [2024-10-08 18:35:36.889526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ec840 00:27:43.781 [2024-10-08 18:35:36.890463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.781 [2024-10-08 18:35:36.890482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.781 [2024-10-08 18:35:36.898891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198fc998 00:27:43.782 [2024-10-08 18:35:36.899813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:36.899831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:36.908225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198fb8b8 00:27:43.782 [2024-10-08 18:35:36.909127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:36.909145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:36.917687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e49b0 00:27:43.782 [2024-10-08 18:35:36.918637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9423 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:36.918654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:36.927149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f5be8 00:27:43.782 [2024-10-08 18:35:36.928077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:36.928095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:36.936486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e4de8 00:27:43.782 [2024-10-08 18:35:36.937386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:36.937420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:36.945861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f0ff8 00:27:43.782 [2024-10-08 18:35:36.946810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:36.946828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:36.955248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198eff18 00:27:43.782 [2024-10-08 18:35:36.956149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:36.956167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:36.964647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198de038 00:27:43.782 [2024-10-08 18:35:36.965570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:36.965589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:36.974013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e95a0 00:27:43.782 [2024-10-08 18:35:36.974963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:36.974982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:36.983430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198edd58 00:27:43.782 [2024-10-08 18:35:36.984358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19041 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:36.984379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:36.992819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e5ec8 00:27:43.782 [2024-10-08 18:35:36.993744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:36.993766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:37.002200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198fe720 00:27:43.782 [2024-10-08 18:35:37.003147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:37.003165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:37.011613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e4140 00:27:43.782 [2024-10-08 18:35:37.012566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:37.012586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:37.021147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ed4e8 00:27:43.782 [2024-10-08 18:35:37.022049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:37.022068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:37.030524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ef270 00:27:43.782 [2024-10-08 18:35:37.031462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:37.031483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:37.040101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f4f40 00:27:43.782 [2024-10-08 18:35:37.041074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:37.041096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:37.049596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f2948 00:27:43.782 [2024-10-08 18:35:37.050435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3935 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:37.050454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:37.058818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f2948 00:27:43.782 [2024-10-08 18:35:37.059788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:37.059807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:37.068370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f2948 00:27:43.782 [2024-10-08 18:35:37.069274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:37.069293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:37.077676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f2948 00:27:43.782 [2024-10-08 18:35:37.078603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:37.078622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:37.087022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f2948 00:27:43.782 [2024-10-08 18:35:37.087999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:37.088017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:43.782 [2024-10-08 18:35:37.096404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f2948 00:27:43.782 [2024-10-08 18:35:37.097302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.782 [2024-10-08 18:35:37.097320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.105856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f2948 00:27:44.045 [2024-10-08 18:35:37.106818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.106837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.115278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f2948 00:27:44.045 [2024-10-08 18:35:37.116217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:4370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.116235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.124639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f2948 00:27:44.045 [2024-10-08 18:35:37.125549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.125567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.133961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f2948 00:27:44.045 [2024-10-08 18:35:37.134894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.134913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.143295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f2948 00:27:44.045 [2024-10-08 18:35:37.144259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.144278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.152660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f2948 00:27:44.045 [2024-10-08 18:35:37.153626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.153644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.163210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f2948 00:27:44.045 [2024-10-08 18:35:37.164571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.164589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.172535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f4f40 00:27:44.045 [2024-10-08 18:35:37.173966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.173984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.182452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198de8a8 00:27:44.045 [2024-10-08 18:35:37.184022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:47 nsid:1 lba:12392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.184040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.192415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f7970 00:27:44.045 [2024-10-08 18:35:37.194133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.194151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.199080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198fcdd0 00:27:44.045 [2024-10-08 18:35:37.199880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.199898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.210252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f1430 00:27:44.045 [2024-10-08 18:35:37.211526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.211545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.217949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f7970 00:27:44.045 [2024-10-08 18:35:37.218649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.218669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.227563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e9e10 00:27:44.045 [2024-10-08 18:35:37.228452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.228470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.237045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f5378 00:27:44.045 [2024-10-08 18:35:37.237929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.237948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.246446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198eaab8 00:27:44.045 [2024-10-08 18:35:37.247357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:71 nsid:1 lba:5974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.247381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.255862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f20d8 00:27:44.045 [2024-10-08 18:35:37.256854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.256872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:44.045 26993.00 IOPS, 105.44 MiB/s [2024-10-08T16:35:37.368Z] [2024-10-08 18:35:37.264544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e7818 00:27:44.045 [2024-10-08 18:35:37.265388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.265406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.274328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e4de8 00:27:44.045 [2024-10-08 18:35:37.275359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.275383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.284234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198eaef0 00:27:44.045 [2024-10-08 18:35:37.285388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.285410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.294206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e5ec8 00:27:44.045 [2024-10-08 18:35:37.295493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.295513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.302965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f3a28 00:27:44.045 [2024-10-08 18:35:37.303887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.303906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.312283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198edd58 00:27:44.045 [2024-10-08 
18:35:37.313227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.313246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.321578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198edd58 00:27:44.045 [2024-10-08 18:35:37.322467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.322486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.330899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198edd58 00:27:44.045 [2024-10-08 18:35:37.331852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.331870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.340234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198edd58 00:27:44.045 [2024-10-08 18:35:37.341163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.341181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.349595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198edd58 00:27:44.045 [2024-10-08 18:35:37.350515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.045 [2024-10-08 18:35:37.350533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:44.045 [2024-10-08 18:35:37.358989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198edd58 00:27:44.045 [2024-10-08 18:35:37.359890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.046 [2024-10-08 18:35:37.359909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.368534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198edd58 00:27:44.365 [2024-10-08 18:35:37.369353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.369372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.378373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198de470 00:27:44.365 
[2024-10-08 18:35:37.379146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.379164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.388407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e0ea0 00:27:44.365 [2024-10-08 18:35:37.389196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.389215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.398268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ed0b0 00:27:44.365 [2024-10-08 18:35:37.399334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.399353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.408865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ed0b0 00:27:44.365 [2024-10-08 18:35:37.410551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.410568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.417299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e1b48 00:27:44.365 [2024-10-08 18:35:37.418129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.418147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.426133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f92c0 00:27:44.365 [2024-10-08 18:35:37.426851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.426870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.435828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198de470 00:27:44.365 [2024-10-08 18:35:37.436786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.436806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.444411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f6458 
00:27:44.365 [2024-10-08 18:35:37.445424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.445442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.454390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e3060 00:27:44.365 [2024-10-08 18:35:37.455439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.455458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.463711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e9168 00:27:44.365 [2024-10-08 18:35:37.464867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.464886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.473505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e23b8 00:27:44.365 [2024-10-08 18:35:37.474758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.474776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.483507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f0bc0 00:27:44.365 [2024-10-08 18:35:37.484794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.484813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.492805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e8d30 00:27:44.365 [2024-10-08 18:35:37.494179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.494197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.502226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e9e10 00:27:44.365 [2024-10-08 18:35:37.503251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.503269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.510810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with 
pdu=0x2000198e1710 00:27:44.365 [2024-10-08 18:35:37.511959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.511977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:44.365 [2024-10-08 18:35:37.520911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e3d08 00:27:44.365 [2024-10-08 18:35:37.521948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.365 [2024-10-08 18:35:37.521966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.530285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e7818 00:27:44.366 [2024-10-08 18:35:37.531389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.366 [2024-10-08 18:35:37.531427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.539591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198feb58 00:27:44.366 [2024-10-08 18:35:37.540538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.366 [2024-10-08 18:35:37.540560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.549741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e3d08 00:27:44.366 [2024-10-08 18:35:37.550792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.366 [2024-10-08 18:35:37.550812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.558890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ed0b0 00:27:44.366 [2024-10-08 18:35:37.560063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.366 [2024-10-08 18:35:37.560082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.568489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e6fa8 00:27:44.366 [2024-10-08 18:35:37.569413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.366 [2024-10-08 18:35:37.569432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.578347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdff5c0) with pdu=0x2000198fe720 00:27:44.366 [2024-10-08 18:35:37.579429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.366 [2024-10-08 18:35:37.579448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.587696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e3d08 00:27:44.366 [2024-10-08 18:35:37.588830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.366 [2024-10-08 18:35:37.588849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.597014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e9e10 00:27:44.366 [2024-10-08 18:35:37.598073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.366 [2024-10-08 18:35:37.598092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.605722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f0350 00:27:44.366 [2024-10-08 18:35:37.606837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.366 [2024-10-08 18:35:37.606856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.615523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e0ea0 00:27:44.366 [2024-10-08 18:35:37.616779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.366 [2024-10-08 18:35:37.616797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.625324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ebfd0 00:27:44.366 [2024-10-08 18:35:37.626773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.366 [2024-10-08 18:35:37.626790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.633891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198df118 00:27:44.366 [2024-10-08 18:35:37.634941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.366 [2024-10-08 18:35:37.634960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.643535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xdff5c0) with pdu=0x2000198feb58 00:27:44.366 [2024-10-08 18:35:37.644191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.366 [2024-10-08 18:35:37.644210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.652367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e4578 00:27:44.366 [2024-10-08 18:35:37.652950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.366 [2024-10-08 18:35:37.652968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.664151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e3498 00:27:44.366 [2024-10-08 18:35:37.665813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.366 [2024-10-08 18:35:37.665831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:44.366 [2024-10-08 18:35:37.670971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198eb328 00:27:44.689 [2024-10-08 18:35:37.671737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.689 [2024-10-08 18:35:37.671758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:44.689 [2024-10-08 18:35:37.683126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e0ea0 00:27:44.689 [2024-10-08 18:35:37.684777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.689 [2024-10-08 18:35:37.684795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:44.689 [2024-10-08 18:35:37.689829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e9e10 00:27:44.689 [2024-10-08 18:35:37.690590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.689 [2024-10-08 18:35:37.690609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:44.689 [2024-10-08 18:35:37.699272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f7da8 00:27:44.689 [2024-10-08 18:35:37.700028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.689 [2024-10-08 18:35:37.700047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:44.689 [2024-10-08 18:35:37.710492] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198fd640 00:27:44.689 [2024-10-08 18:35:37.711796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.689 [2024-10-08 18:35:37.711814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:44.689 [2024-10-08 18:35:37.720035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198df550 00:27:44.689 [2024-10-08 18:35:37.721362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.689 [2024-10-08 18:35:37.721383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:44.689 [2024-10-08 18:35:37.729212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e4578 00:27:44.689 [2024-10-08 18:35:37.730472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.689 [2024-10-08 18:35:37.730490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:44.689 [2024-10-08 18:35:37.738786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f7100 00:27:44.689 [2024-10-08 18:35:37.740093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.689 [2024-10-08 18:35:37.740112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:44.689 [2024-10-08 18:35:37.746987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198fc560 00:27:44.689 [2024-10-08 18:35:37.747688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.689 [2024-10-08 18:35:37.747707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:44.689 [2024-10-08 18:35:37.758332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198efae0 00:27:44.689 [2024-10-08 18:35:37.759923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.689 [2024-10-08 18:35:37.759941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:44.689 [2024-10-08 18:35:37.765087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e4578 00:27:44.689 [2024-10-08 18:35:37.765865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.689 [2024-10-08 18:35:37.765883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:44.689 [2024-10-08 18:35:37.776456] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198fc560 00:27:44.689 [2024-10-08 18:35:37.777739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.689 [2024-10-08 18:35:37.777758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:44.689 [2024-10-08 18:35:37.785950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ff3c8 00:27:44.690 [2024-10-08 18:35:37.787277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.787296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.794589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ecc78 00:27:44.690 [2024-10-08 18:35:37.796098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.796121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.802928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198fc128 00:27:44.690 [2024-10-08 18:35:37.803754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.803774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.812734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ec408 00:27:44.690 [2024-10-08 18:35:37.813688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.813708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.822538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f20d8 00:27:44.690 [2024-10-08 18:35:37.823572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.823590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.833685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198fe2e8 00:27:44.690 [2024-10-08 18:35:37.835279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.835298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 
18:35:37.840404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f8a50 00:27:44.690 [2024-10-08 18:35:37.841206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.841225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.850687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f6890 00:27:44.690 [2024-10-08 18:35:37.851392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.851412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.861657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e1710 00:27:44.690 [2024-10-08 18:35:37.862909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.862932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.870803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f6890 00:27:44.690 [2024-10-08 18:35:37.872054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.872074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.880502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f1ca0 00:27:44.690 [2024-10-08 18:35:37.881772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.881791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.888322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e1f80 00:27:44.690 [2024-10-08 18:35:37.888794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.888813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.899078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e7818 00:27:44.690 [2024-10-08 18:35:37.900273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.900292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:27:44.690 [2024-10-08 18:35:37.908885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e49b0 00:27:44.690 [2024-10-08 18:35:37.910101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.910119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.916814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198eea00 00:27:44.690 [2024-10-08 18:35:37.917415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.917434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.926682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e4140 00:27:44.690 [2024-10-08 18:35:37.927387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.927406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.935515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f0788 00:27:44.690 [2024-10-08 18:35:37.936115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.936133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.945364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198fda78 00:27:44.690 [2024-10-08 18:35:37.946054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.946074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.955388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e4578 00:27:44.690 [2024-10-08 18:35:37.956224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.956244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.964201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e5220 00:27:44.690 [2024-10-08 18:35:37.965044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.965063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0070 
p:0 m:0 dnr:0 00:27:44.690 [2024-10-08 18:35:37.975488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198fe720 00:27:44.690 [2024-10-08 18:35:37.977071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.690 [2024-10-08 18:35:37.977090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:37.984038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e3498 00:27:44.950 [2024-10-08 18:35:37.984828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:37.984848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:37.993714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198de038 00:27:44.950 [2024-10-08 18:35:37.994846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:37.994868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.003829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e99d8 00:27:44.950 [2024-10-08 18:35:38.004581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.004601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.016472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198fe720 00:27:44.950 [2024-10-08 18:35:38.018155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.018174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.025736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f8a50 00:27:44.950 [2024-10-08 18:35:38.026830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.026850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.035740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f6458 00:27:44.950 [2024-10-08 18:35:38.036729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.036748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 
cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.045328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e3d08 00:27:44.950 [2024-10-08 18:35:38.046320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.046339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.054918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198eaab8 00:27:44.950 [2024-10-08 18:35:38.055916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.055936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.064362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198de8a8 00:27:44.950 [2024-10-08 18:35:38.065364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.065391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.073740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ec408 00:27:44.950 [2024-10-08 18:35:38.074715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.074735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.082802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198de8a8 00:27:44.950 [2024-10-08 18:35:38.083900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.083919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.092506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ea248 00:27:44.950 [2024-10-08 18:35:38.093458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.093477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.101075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198fda78 00:27:44.950 [2024-10-08 18:35:38.102285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.102304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.110913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f6458 00:27:44.950 [2024-10-08 18:35:38.112278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.112312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.119123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e8088 00:27:44.950 [2024-10-08 18:35:38.119959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.119978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.129189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ee190 00:27:44.950 [2024-10-08 18:35:38.130015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.130034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.138168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198eee38 00:27:44.950 [2024-10-08 18:35:38.138869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.138888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.147521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e0a68 00:27:44.950 [2024-10-08 18:35:38.148259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.148277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.158330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ec408 00:27:44.950 [2024-10-08 18:35:38.159445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.159464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.168037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f9b30 00:27:44.950 [2024-10-08 18:35:38.169034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.950 [2024-10-08 18:35:38.169054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:44.950 [2024-10-08 18:35:38.176635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198eee38 00:27:44.950 [2024-10-08 18:35:38.177870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.951 [2024-10-08 18:35:38.177889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:44.951 [2024-10-08 18:35:38.186116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f35f0 00:27:44.951 [2024-10-08 18:35:38.187168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.951 [2024-10-08 18:35:38.187187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:44.951 [2024-10-08 18:35:38.195953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198de038 00:27:44.951 [2024-10-08 18:35:38.197115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.951 [2024-10-08 18:35:38.197135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:44.951 [2024-10-08 18:35:38.204762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e3d08 00:27:44.951 [2024-10-08 18:35:38.205490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.951 [2024-10-08 18:35:38.205509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:44.951 [2024-10-08 18:35:38.213913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e8d30 00:27:44.951 [2024-10-08 18:35:38.214559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.951 [2024-10-08 18:35:38.214579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:44.951 [2024-10-08 18:35:38.222704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e01f8 00:27:44.951 [2024-10-08 18:35:38.223321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.951 [2024-10-08 18:35:38.223340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:44.951 [2024-10-08 18:35:38.232599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198ebb98 00:27:44.951 [2024-10-08 18:35:38.233425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.951 [2024-10-08 18:35:38.233443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:27:44.951 [2024-10-08 18:35:38.242753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198e0a68
00:27:44.951 [2024-10-08 18:35:38.243496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:44.951 [2024-10-08 18:35:38.243515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:44.951 [2024-10-08 18:35:38.252142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff5c0) with pdu=0x2000198f8618
00:27:44.951 [2024-10-08 18:35:38.252888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:44.951 [2024-10-08 18:35:38.252907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:44.951 26945.50 IOPS, 105.26 MiB/s
00:27:44.951 Latency(us)
00:27:44.951 [2024-10-08T16:35:38.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:44.951 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:44.951 nvme0n1 : 2.00 26972.41 105.36 0.00 0.00 4741.68 2262.55 12607.88
00:27:44.951 [2024-10-08T16:35:38.274Z] ===================================================================================================================
00:27:44.951 [2024-10-08T16:35:38.274Z] Total : 26972.41 105.36 0.00 0.00 4741.68 2262.55 12607.88
00:27:44.951 {
00:27:44.951 "results": [
00:27:44.951 {
00:27:44.951 "job": "nvme0n1",
00:27:44.951 "core_mask": "0x2",
00:27:44.951 "workload": "randwrite",
00:27:44.951 "status": "finished",
00:27:44.951 "queue_depth": 128,
00:27:44.951 "io_size": 4096,
00:27:44.951 "runtime": 2.00275,
00:27:44.951 "iops": 26972.4129322182,
00:27:44.951 "mibps": 105.36098801647735,
00:27:44.951 "io_failed": 0,
00:27:44.951 "io_timeout": 0,
00:27:44.951 "avg_latency_us": 4741.683119255218,
00:27:44.951 "min_latency_us": 2262.552380952381,
00:27:44.951 "max_latency_us": 12607.878095238095
00:27:44.951 }
00:27:44.951 ],
00:27:44.951 "core_count": 1
00:27:44.951 }
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:45.213 | .driver_specific
00:27:45.213 | .nvme_error
00:27:45.213 | .status_code
00:27:45.213 | .command_transient_transport_error'
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 211 > 0 ))
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 571649
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 571649 ']'
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 571649
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 571649
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 571649'
00:27:45.213 killing process with pid 571649
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 571649
00:27:45.213 Received shutdown signal, test time was about 2.000000 seconds
00:27:45.213
00:27:45.213 Latency(us)
00:27:45.213 [2024-10-08T16:35:38.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:45.213 [2024-10-08T16:35:38.536Z] ===================================================================================================================
00:27:45.213 [2024-10-08T16:35:38.536Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:45.213 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 571649
00:27:45.474 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:45.474 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:45.474 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:45.474 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:45.474 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:45.474 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=572350
00:27:45.474 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 572350 /var/tmp/bperf.sock
00:27:45.474 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:45.474 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 572350 ']'
00:27:45.474 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:45.474 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:45.474 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:45.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
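For readers following the harness: the get_transient_errcount check traced above (host/digest.sh@71) amounts to asking bdevperf for per-bdev iostat over its RPC socket and pulling one counter out of the JSON with jq. A minimal standalone sketch, using the paths and bdev name from this run; the dotted jq path is an equivalent one-line form of the multi-line filter echoed in the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Query bdevperf's iostat, then extract the NVMe transient-transport-error
# counter (populated because bdev_nvme_set_options was given --nvme-error-stat).
errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# The step passes only if at least one such error was counted (211 in this run).
(( errcount > 0 ))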
00:27:45.474 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:45.474 18:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:45.474 [2024-10-08 18:35:38.759949] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:27:45.474 [2024-10-08 18:35:38.759999] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid572350 ]
00:27:45.474 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:45.474 Zero copy mechanism will not be used.
00:27:45.732 [2024-10-08 18:35:38.826132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:45.732 [2024-10-08 18:35:38.905075] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:27:46.299 18:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:46.299 18:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:27:46.299 18:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:46.299 18:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:46.558 18:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:46.558 18:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:46.558 18:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:46.558 18:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:46.558 18:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:46.558 18:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:46.816 nvme0n1
00:27:46.816 18:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:46.816 18:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:46.816 18:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:47.075 18:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:47.075 18:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:47.075 18:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
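Condensed, the setup just traced for this 131072-byte, qd=16 error run boils down to the sequence below. One point worth noting, inferred from the missing -s flag on rpc_cmd in the trace: the accel_error_inject_error calls go to the target application's default RPC socket, while the bdev_nvme_* calls go to bdevperf's /var/tmp/bperf.sock. A sketch of the flow, not the verbatim harness:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# bdevperf side: keep NVMe error statistics and retry failed I/O indefinitely.
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# target side: make sure no error injection is active before attaching.
$rpc accel_error_inject_error -o crc32c -t disable
# attach the controller over TCP with data digest (--ddgst) enabled.
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# target side: corrupt every 32nd crc32c operation, so digest checks on the
# incoming WRITE data fail (the tcp.c digest errors interleaved above).
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32
# drive the workload; each failed digest surfaces to the host as a
# COMMAND TRANSIENT TRANSPORT ERROR completion, which the bdev layer retries.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests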
00:27:47.075 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:47.075 Zero copy mechanism will not be used. 00:27:47.075 Running I/O for 2 seconds... 00:27:47.075 [2024-10-08 18:35:40.243827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.075 [2024-10-08 18:35:40.244107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.075 [2024-10-08 18:35:40.244137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.075 [2024-10-08 18:35:40.248489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.075 [2024-10-08 18:35:40.248754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.075 [2024-10-08 18:35:40.248779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.075 [2024-10-08 18:35:40.253058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.075 [2024-10-08 18:35:40.253322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.075 [2024-10-08 18:35:40.253345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.075 [2024-10-08 18:35:40.257521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.075 [2024-10-08 18:35:40.257785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.075 [2024-10-08 18:35:40.257806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.075 [2024-10-08 18:35:40.261980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.075 [2024-10-08 18:35:40.262242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.262263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.267163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.267432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.267454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.273697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.273974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.273995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.280298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.280564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.280586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.286836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.287099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.287120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.293367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.293640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.293666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.299951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.300213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.300235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.306604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.306852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.306874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.313284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.313550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.313572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.320209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.320478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.320499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.326546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.326821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.326842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.332980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.333238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.333260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.339267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.339521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.339544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.345496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.345756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.345778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.351793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.352043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.352065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.358553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.358800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.358821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.364834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.365096] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.365118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.370938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.371195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.371215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.377220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.377488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.377508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.383637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.383913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.383934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.389872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.390122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.390143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.076 [2024-10-08 18:35:40.396324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.076 [2024-10-08 18:35:40.396595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.076 [2024-10-08 18:35:40.396616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.336 [2024-10-08 18:35:40.402635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.336 [2024-10-08 18:35:40.402898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.336 [2024-10-08 18:35:40.402919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.336 [2024-10-08 18:35:40.408977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.336 [2024-10-08 18:35:40.409237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.337 [2024-10-08 18:35:40.409258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.337 [2024-10-08 18:35:40.415482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.337 [2024-10-08 18:35:40.415650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.337 [2024-10-08 18:35:40.415669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.337 [2024-10-08 18:35:40.422199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.337 [2024-10-08 18:35:40.422468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.337 [2024-10-08 18:35:40.422490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.337 [2024-10-08 18:35:40.427442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.337 [2024-10-08 18:35:40.427719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.337 [2024-10-08 18:35:40.427740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.337 [2024-10-08 18:35:40.432185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.337 [2024-10-08 18:35:40.432449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.337 [2024-10-08 18:35:40.432470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.337 [2024-10-08 18:35:40.437132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.337 [2024-10-08 18:35:40.437396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.337 [2024-10-08 18:35:40.437417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.337 [2024-10-08 18:35:40.442163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.337 [2024-10-08 18:35:40.442429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.337 [2024-10-08 18:35:40.442450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.337 [2024-10-08 18:35:40.448359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.337 
00:27:47.337 [2024-10-08 18:35:40.448654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.337 [2024-10-08 18:35:40.448675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:47.337 [2024-10-08 18:35:40.453142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90
00:27:47.337 [2024-10-08 18:35:40.453408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.337 [2024-10-08 18:35:40.453432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:47.337 [2024-10-08 18:35:40.457756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90
00:27:47.337 [2024-10-08 18:35:40.458018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.337 [2024-10-08 18:35:40.458039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... log condensed: the same three-line sequence (tcp.c:2233:data_crc32_calc_done data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90, a WRITE with sqid:1 cid:15 nsid:1 len:32, and a TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0) repeats for roughly ninety further WRITE commands between 18:35:40.462 and 18:35:40.915; only the timestamps, the lba values, and the sqhd field, which cycles through 0001/0021/0041/0061, change from entry to entry ...]
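[editor's note: illustrative sketch, not SPDK source.] The *ERROR* entries above come from tcp.c:data_crc32_calc_done(), the callback SPDK runs once the CRC32C data digest of a received NVMe/TCP data PDU has been recomputed; a mismatch against the digest carried at the tail of the PDU is logged as a data digest error, and the affected command is failed with the retriable TRANSIENT TRANSPORT ERROR (00/22, dnr:0) status seen in every completion here. A minimal pure-Python model of that check, assuming the standard Castagnoli CRC32C (reflected polynomial 0x82F63B78):

    def crc32c(data: bytes, crc: int = 0) -> int:
        """Bitwise CRC32C (Castagnoli); slow but dependency-free."""
        crc ^= 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    def data_digest_ok(payload: bytes, received_digest: int) -> bool:
        # Model of the check whose failure is logged above: on a mismatch,
        # SPDK reports "Data digest error" and completes the command with
        # TRANSIENT TRANSPORT ERROR (00/22), dnr:0, so the host may retry.
        return crc32c(payload) == received_digest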
00:27:47.601 [2024-10-08 18:35:40.919503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90
00:27:47.601 [2024-10-08 18:35:40.919747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.601 [2024-10-08 18:35:40.919767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... log condensed: the identical digest-error/WRITE/completion sequence repeats for roughly forty-five more commands between 18:35:40.925 and 18:35:41.160, again varying only in timestamp, lba, and sqhd ...]
00:27:47.863 [2024-10-08 18:35:41.165056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90
00:27:47.863 [2024-10-08 18:35:41.165284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.863 [2024-10-08 18:35:41.165304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:47.863 [2024-10-08 18:35:41.170239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90
00:27:47.863 [2024-10-08 18:35:41.170489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:27:47.863 [2024-10-08 18:35:41.170511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.863 [2024-10-08 18:35:41.175623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.863 [2024-10-08 18:35:41.175847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.863 [2024-10-08 18:35:41.175868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.863 [2024-10-08 18:35:41.180293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:47.863 [2024-10-08 18:35:41.180541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.863 [2024-10-08 18:35:41.180562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.124 [2024-10-08 18:35:41.184847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.124 [2024-10-08 18:35:41.185076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.124 [2024-10-08 18:35:41.185100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.124 [2024-10-08 18:35:41.189503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.124 [2024-10-08 18:35:41.189741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.124 [2024-10-08 18:35:41.189761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.124 [2024-10-08 18:35:41.194153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.124 [2024-10-08 18:35:41.194403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.124 [2024-10-08 18:35:41.194423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.124 [2024-10-08 18:35:41.198612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.124 [2024-10-08 18:35:41.198841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.124 [2024-10-08 18:35:41.198861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.124 [2024-10-08 18:35:41.203135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.124 [2024-10-08 18:35:41.203362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.124 [2024-10-08 18:35:41.203388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.124 [2024-10-08 18:35:41.207657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.124 [2024-10-08 18:35:41.207883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.124 [2024-10-08 18:35:41.207904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.124 [2024-10-08 18:35:41.212094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.124 [2024-10-08 18:35:41.212322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.124 [2024-10-08 18:35:41.212343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.124 [2024-10-08 18:35:41.216657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.124 [2024-10-08 18:35:41.216887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.124 [2024-10-08 18:35:41.216908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.124 [2024-10-08 18:35:41.221025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.124 [2024-10-08 18:35:41.221258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.124 [2024-10-08 18:35:41.221278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.124 [2024-10-08 18:35:41.225399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.124 [2024-10-08 18:35:41.225647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.124 [2024-10-08 18:35:41.225667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.124 [2024-10-08 18:35:41.229912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.124 [2024-10-08 18:35:41.230141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.124 [2024-10-08 18:35:41.230161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.124 [2024-10-08 18:35:41.234583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.124 [2024-10-08 18:35:41.234816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.124 [2024-10-08 18:35:41.234836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.124 6032.00 IOPS, 754.00 MiB/s [2024-10-08T16:35:41.448Z] [2024-10-08 18:35:41.240478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.240721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.240743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.245592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.245821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.245856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.250155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.250392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.250412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.254578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.254833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.254855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.259192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.259433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.259455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.263659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.263907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.263934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.268139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.268393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.268415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.272715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.272962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.272983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.277859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.278087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.278108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.282917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.283149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.283170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.287723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.287968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.287990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.292307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.292556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.292577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.297184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.297424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.297445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.303312] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.303631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.303652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.310239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.310521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.310542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.316869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.317194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.317214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.323844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.324146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.324167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.331471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.331776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.331797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.338570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.338871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.338893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.346061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.346395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.346415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.353256] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.353564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.353585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.359899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.360148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.360169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.365656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.365900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.365921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.370754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.370982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.371003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.376040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.376269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.376290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.381010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.381285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.381305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.385908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.386153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.386174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:48.125 [2024-10-08 18:35:41.390877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.391104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.125 [2024-10-08 18:35:41.391125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.125 [2024-10-08 18:35:41.395411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.125 [2024-10-08 18:35:41.395657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.126 [2024-10-08 18:35:41.395678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.126 [2024-10-08 18:35:41.400890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.126 [2024-10-08 18:35:41.401202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.126 [2024-10-08 18:35:41.401223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.126 [2024-10-08 18:35:41.406833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.126 [2024-10-08 18:35:41.407092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.126 [2024-10-08 18:35:41.407113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.126 [2024-10-08 18:35:41.412009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.126 [2024-10-08 18:35:41.412239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.126 [2024-10-08 18:35:41.412264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.126 [2024-10-08 18:35:41.417083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.126 [2024-10-08 18:35:41.417350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.126 [2024-10-08 18:35:41.417371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.126 [2024-10-08 18:35:41.422276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.126 [2024-10-08 18:35:41.422509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.126 [2024-10-08 18:35:41.422530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.126 [2024-10-08 18:35:41.427529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.126 [2024-10-08 18:35:41.427775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.126 [2024-10-08 18:35:41.427796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.126 [2024-10-08 18:35:41.432548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.126 [2024-10-08 18:35:41.432811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.126 [2024-10-08 18:35:41.432832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.126 [2024-10-08 18:35:41.437512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.126 [2024-10-08 18:35:41.437770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.126 [2024-10-08 18:35:41.437790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.126 [2024-10-08 18:35:41.442548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.126 [2024-10-08 18:35:41.442796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.126 [2024-10-08 18:35:41.442817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.447461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.447714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.447734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.452353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.452588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.452609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.456805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.457042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.457063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.461881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.462133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.462153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.466178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.466427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.466448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.470475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.470720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.470740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.474703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.474951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.474972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.479003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.479247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.479268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.483238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.483472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.483493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.487621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.487851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.487871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.492083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.492311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.492331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.496527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.496759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.496779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.500983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.501212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.501233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.505546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.505797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.505818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.510456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.510686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.510709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.515075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.515310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 [2024-10-08 18:35:41.515332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.386 [2024-10-08 18:35:41.519629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.386 [2024-10-08 18:35:41.519887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.386 
[2024-10-08 18:35:41.519908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.524241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.524478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.524499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.528851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.529081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.529102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.533384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.533631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.533657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.537999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.538245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.538267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.542517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.542770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.542790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.547045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.547276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.547297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.551573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.551803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.551824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.556102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.556332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.556352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.560583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.560831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.560852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.565170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.565421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.565441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.569721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.569967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.569988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.573993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.574242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.574263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.578520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.578747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.578767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.583199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.583449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.583470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.588340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.588590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.588611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.593364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.593596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.593616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.597877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.598106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.598126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.602342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.602575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.602596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.606893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.607123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.607143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.611341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.611597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.387 [2024-10-08 18:35:41.611617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.387 [2024-10-08 18:35:41.615806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90 00:27:48.387 [2024-10-08 18:35:41.616053] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.387 [2024-10-08 18:35:41.616073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:48.387 [2024-10-08 18:35:41.620372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90
00:27:48.387 [2024-10-08 18:35:41.620619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.387 [2024-10-08 18:35:41.620650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line pattern (tcp.c:2233:data_crc32_calc_done *ERROR*, nvme_qpair.c:243 WRITE *NOTICE*, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR *NOTICE*) repeats for every remaining WRITE on the same qpair (0xdff900) from 18:35:41.625 through 18:35:42.233; only the timestamps, the lba, and the sqhd field (cycling 0001/0021/0041/0061) vary ...]
00:27:49.169 [2024-10-08 18:35:42.237952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90
00:27:49.169 [2024-10-08 18:35:42.238198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.169 [2024-10-08 18:35:42.238219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
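What follows is the bdevperf summary for this 2-second random-write run. For reference, a hypothetical invocation consistent with the JSON results below (core mask 0x2, queue depth 16, 128 KiB I/Os, RPC socket /var/tmp/bperf.sock) might look like the sketch here; the actual digest.sh arguments are not visible in this log, so the exact flag set is an assumption:

    # Sketch only: flags reconstructed from the "results" JSON below, not from digest.sh.
    # -z waits for configuration over RPC, -r selects the bperf.sock RPC socket,
    # -m 0x2 matches "core_mask", -q 16 and -o 131072 match queue_depth and io_size,
    # -w randwrite and -t 2 match the workload and the ~2 s runtime.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bperf.sock -m 0x2 -q 16 -o 131072 -w randwrite -t 2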
00:27:49.169 6114.00 IOPS, 764.25 MiB/s [2024-10-08T16:35:42.492Z] [2024-10-08 18:35:42.244057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff900) with pdu=0x2000198fef90
00:27:49.169 [2024-10-08 18:35:42.244126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.169 [2024-10-08 18:35:42.244145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:49.169
00:27:49.169 Latency(us)
00:27:49.169 [2024-10-08T16:35:42.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:49.169 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:49.169 nvme0n1 : 2.00 6110.55 763.82 0.00 0.00 2614.03 1997.29 7739.49
00:27:49.169 [2024-10-08T16:35:42.492Z] ===================================================================================================================
00:27:49.169 [2024-10-08T16:35:42.492Z] Total : 6110.55 763.82 0.00 0.00 2614.03 1997.29 7739.49
00:27:49.169 {
00:27:49.169   "results": [
00:27:49.169     {
00:27:49.169       "job": "nvme0n1",
00:27:49.169       "core_mask": "0x2",
00:27:49.169       "workload": "randwrite",
00:27:49.169       "status": "finished",
00:27:49.169       "queue_depth": 16,
00:27:49.169       "io_size": 131072,
00:27:49.169       "runtime": 2.004237,
00:27:49.169       "iops": 6110.554789678067,
00:27:49.169       "mibps": 763.8193487097584,
00:27:49.169       "io_failed": 0,
00:27:49.169       "io_timeout": 0,
00:27:49.169       "avg_latency_us": 2614.0305156948057,
00:27:49.169       "min_latency_us": 1997.287619047619,
00:27:49.169       "max_latency_us": 7739.489523809524
00:27:49.169     }
00:27:49.169   ],
00:27:49.169   "core_count": 1
00:27:49.169 }
00:27:49.169 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:49.169 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:49.169 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:49.169 | .driver_specific
00:27:49.169 | .nvme_error
00:27:49.169 | .status_code
00:27:49.169 | .command_transient_transport_error'
00:27:49.169 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:49.169 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 395 > 0 ))
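The get_transient_errcount trace above shows how the test decides pass/fail: it reads bdevperf's iostat over the bperf RPC socket and extracts the NVMe transient-transport-error counter with jq. A minimal standalone sketch using exactly the RPC call and jq filter from the trace (only the shell variable names are invented here):

    # Query bdevperf's iostat for nvme0n1 and pull out the transient error count,
    # as host/digest.sh@27-@28 does above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # digest.sh@71 then asserts the counter is non-zero; this run observed 395.
    (( errcount > 0 )) && echo "observed $errcount transient transport errors"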
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:49.427 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 572350' 00:27:49.427 killing process with pid 572350 00:27:49.427 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 572350 00:27:49.427 Received shutdown signal, test time was about 2.000000 seconds 00:27:49.427 00:27:49.427 Latency(us) 00:27:49.427 [2024-10-08T16:35:42.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.427 [2024-10-08T16:35:42.750Z] =================================================================================================================== 00:27:49.427 [2024-10-08T16:35:42.750Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:49.427 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 572350 00:27:49.427 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 570224 00:27:49.427 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 570224 ']' 00:27:49.427 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 570224 00:27:49.427 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:49.427 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:49.427 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 570224 00:27:49.686 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:49.686 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:49.686 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 570224' 00:27:49.686 killing process with pid 570224 00:27:49.686 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 570224 00:27:49.686 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 570224 00:27:49.686 00:27:49.686 real 0m17.154s 00:27:49.686 user 0m33.019s 00:27:49.686 sys 0m4.743s 00:27:49.686 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:49.686 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:49.686 ************************************ 00:27:49.686 END TEST nvmf_digest_error 00:27:49.686 ************************************ 00:27:49.686 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:49.686 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:49.686 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:49.686 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:49.686 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:49.686 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:49.686 18:35:42 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:49.686 18:35:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:49.686 rmmod nvme_tcp 00:27:49.686 rmmod nvme_fabrics 00:27:49.945 rmmod nvme_keyring 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 570224 ']' 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 570224 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 570224 ']' 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 570224 00:27:49.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (570224) - No such process 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 570224 is not found' 00:27:49.945 Process with pid 570224 is not found 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.945 18:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.849 18:35:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:51.849 00:27:51.849 real 0m42.969s 00:27:51.849 user 1m8.239s 00:27:51.849 sys 0m14.194s 00:27:51.849 18:35:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:51.849 18:35:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:51.849 ************************************ 00:27:51.849 END TEST nvmf_digest 00:27:51.849 ************************************ 00:27:51.849 18:35:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:51.849 18:35:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:51.849 18:35:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:51.849 18:35:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:51.849 18:35:45 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:51.849 18:35:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:51.849 18:35:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.109 ************************************ 00:27:52.109 START TEST nvmf_bdevperf 00:27:52.109 ************************************ 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:52.109 * Looking for test storage... 00:27:52.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:52.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.109 --rc genhtml_branch_coverage=1 00:27:52.109 --rc genhtml_function_coverage=1 00:27:52.109 --rc genhtml_legend=1 00:27:52.109 --rc geninfo_all_blocks=1 00:27:52.109 --rc geninfo_unexecuted_blocks=1 00:27:52.109 00:27:52.109 ' 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:52.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.109 --rc genhtml_branch_coverage=1 00:27:52.109 --rc genhtml_function_coverage=1 00:27:52.109 --rc genhtml_legend=1 00:27:52.109 --rc geninfo_all_blocks=1 00:27:52.109 --rc geninfo_unexecuted_blocks=1 00:27:52.109 00:27:52.109 ' 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:52.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.109 --rc genhtml_branch_coverage=1 00:27:52.109 --rc genhtml_function_coverage=1 00:27:52.109 --rc genhtml_legend=1 00:27:52.109 --rc geninfo_all_blocks=1 00:27:52.109 --rc geninfo_unexecuted_blocks=1 00:27:52.109 00:27:52.109 ' 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:52.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.109 --rc genhtml_branch_coverage=1 00:27:52.109 --rc genhtml_function_coverage=1 00:27:52.109 --rc genhtml_legend=1 00:27:52.109 --rc geninfo_all_blocks=1 00:27:52.109 --rc geninfo_unexecuted_blocks=1 00:27:52.109 00:27:52.109 ' 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.109 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:52.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:52.110 18:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.676 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.676 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:58.676 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:58.676 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:58.676 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:58.676 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:58.676 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:58.676 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:58.676 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:58.677 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:58.677 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
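Both E810 ports have now matched (vendor 0x8086, device 0x159b, driver ice); the for net_dev loop that follows resolves each PCI function to its kernel interface via /sys/bus/pci/devices/<bdf>/net, which is where the cvl_0_0 and cvl_0_1 names printed below come from. A standalone sketch of that sysfs walk (an approximation, not the harness's exact code):

    # Map every Intel E810 (8086:159b) PCI function to its net interface(s).
    for pci in /sys/bus/pci/devices/*; do
        [ -r "$pci/vendor" ] && [ "$(cat "$pci/vendor")" = "0x8086" ] || continue
        [ "$(cat "$pci/device")" = "0x159b" ] || continue
        [ -d "$pci/net" ] || continue     # skip functions with no bound netdev
        echo "Found net devices under ${pci##*/}: $(ls "$pci/net")"
    done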
00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:58.677 Found net devices under 0000:86:00.0: cvl_0_0 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:58.677 Found net devices under 0000:86:00.1: cvl_0_1 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:58.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:27:58.677 00:27:58.677 --- 10.0.0.2 ping statistics --- 00:27:58.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.677 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:58.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:27:58.677 00:27:58.677 --- 10.0.0.1 ping statistics --- 00:27:58.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.677 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=576576 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 576576 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 576576 ']' 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:58.677 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.678 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:58.678 18:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.678 [2024-10-08 18:35:51.398691] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:27:58.678 [2024-10-08 18:35:51.398739] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.678 [2024-10-08 18:35:51.470152] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:58.678 [2024-10-08 18:35:51.549308] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.678 [2024-10-08 18:35:51.549345] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.678 [2024-10-08 18:35:51.549352] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.678 [2024-10-08 18:35:51.549359] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.678 [2024-10-08 18:35:51.549364] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.678 [2024-10-08 18:35:51.550332] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.678 [2024-10-08 18:35:51.550439] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.678 [2024-10-08 18:35:51.550440] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.937 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:58.937 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:27:58.937 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:58.937 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:58.937 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:59.195 [2024-10-08 18:35:52.276766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:59.195 Malloc0 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
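At this point the TCP transport, the 64 MiB Malloc0 bdev, and subsystem nqn.2016-06.io.spdk:cnode1 exist; the namespace and listener RPCs follow on the next lines. Pulled together, the target bring-up this test drives over /var/tmp/spdk.sock amounts to (rpc.py standing in for the full scripts/rpc.py path, arguments exactly as logged here):

    # Target-side setup, one RPC per step.
    rpc.py nvmf_create_transport -t tcp -o -u 8192        # *** TCP Transport Init ***
    rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Since the target runs inside the cvl_0_0_ns_spdk namespace, 10.0.0.2:4420 is reachable from the initiator over cvl_0_1.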
00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:59.195 [2024-10-08 18:35:52.345219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:59.195 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:59.195 { 00:27:59.195 "params": { 00:27:59.195 "name": "Nvme$subsystem", 00:27:59.195 "trtype": "$TEST_TRANSPORT", 00:27:59.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.195 "adrfam": "ipv4", 00:27:59.195 "trsvcid": "$NVMF_PORT", 00:27:59.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.195 "hdgst": ${hdgst:-false}, 00:27:59.196 "ddgst": ${ddgst:-false} 00:27:59.196 }, 00:27:59.196 "method": "bdev_nvme_attach_controller" 00:27:59.196 } 00:27:59.196 EOF 00:27:59.196 )") 00:27:59.196 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:27:59.196 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:27:59.196 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:27:59.196 18:35:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:27:59.196 "params": { 00:27:59.196 "name": "Nvme1", 00:27:59.196 "trtype": "tcp", 00:27:59.196 "traddr": "10.0.0.2", 00:27:59.196 "adrfam": "ipv4", 00:27:59.196 "trsvcid": "4420", 00:27:59.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.196 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:59.196 "hdgst": false, 00:27:59.196 "ddgst": false 00:27:59.196 }, 00:27:59.196 "method": "bdev_nvme_attach_controller" 00:27:59.196 }' 00:27:59.196 [2024-10-08 18:35:52.396647] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:27:59.196 [2024-10-08 18:35:52.396697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid576647 ] 00:27:59.196 [2024-10-08 18:35:52.463572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.453 [2024-10-08 18:35:52.536139] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.453 Running I/O for 1 seconds... 00:28:00.829 11447.00 IOPS, 44.71 MiB/s 00:28:00.829 Latency(us) 00:28:00.829 [2024-10-08T16:35:54.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.829 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:00.829 Verification LBA range: start 0x0 length 0x4000 00:28:00.829 Nvme1n1 : 1.01 11437.97 44.68 0.00 0.00 11150.77 2371.78 13107.20 00:28:00.829 [2024-10-08T16:35:54.152Z] =================================================================================================================== 00:28:00.829 [2024-10-08T16:35:54.152Z] Total : 11437.97 44.68 0.00 0.00 11150.77 2371.78 13107.20 00:28:00.829 18:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=576970 00:28:00.829 18:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:00.829 18:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:00.829 18:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:00.829 18:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:28:00.829 18:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:28:00.829 18:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:00.829 18:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:00.829 { 00:28:00.829 "params": { 00:28:00.829 "name": "Nvme$subsystem", 00:28:00.829 "trtype": "$TEST_TRANSPORT", 00:28:00.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.829 "adrfam": "ipv4", 00:28:00.829 "trsvcid": "$NVMF_PORT", 00:28:00.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.829 "hdgst": ${hdgst:-false}, 00:28:00.829 "ddgst": ${ddgst:-false} 00:28:00.829 }, 00:28:00.829 "method": "bdev_nvme_attach_controller" 00:28:00.829 } 00:28:00.829 EOF 00:28:00.829 )") 00:28:00.829 18:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:28:00.829 18:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
00:28:00.829 18:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:28:00.829 18:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:00.829 "params": { 00:28:00.829 "name": "Nvme1", 00:28:00.829 "trtype": "tcp", 00:28:00.829 "traddr": "10.0.0.2", 00:28:00.829 "adrfam": "ipv4", 00:28:00.829 "trsvcid": "4420", 00:28:00.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:00.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:00.829 "hdgst": false, 00:28:00.829 "ddgst": false 00:28:00.829 }, 00:28:00.829 "method": "bdev_nvme_attach_controller" 00:28:00.829 }' 00:28:00.829 [2024-10-08 18:35:53.983531] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:28:00.829 [2024-10-08 18:35:53.983580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid576970 ] 00:28:00.829 [2024-10-08 18:35:54.049828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.829 [2024-10-08 18:35:54.122337] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.088 Running I/O for 15 seconds... 00:28:03.402 11091.00 IOPS, 43.32 MiB/s [2024-10-08T16:35:56.986Z] 11274.00 IOPS, 44.04 MiB/s [2024-10-08T16:35:56.986Z] 18:35:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 576576 00:28:03.663 18:35:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:03.663 [2024-10-08 18:35:56.952817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.663 [2024-10-08 18:35:56.952853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.663 [2024-10-08 18:35:56.952870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.663 [2024-10-08 18:35:56.952880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.663 [2024-10-08 18:35:56.952891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.663 [2024-10-08 18:35:56.952899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.663 [2024-10-08 18:35:56.952908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.663 [2024-10-08 18:35:56.952916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.663 [2024-10-08 18:35:56.952925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.663 [2024-10-08 18:35:56.952933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.663 [2024-10-08 18:35:56.952942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.663 [2024-10-08 
18:35:56.952950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[dozens of further stanzas in the same 18:35:56.95xxxx burst elided: nvme_io_qpair_print_command prints each remaining queued command on sqid:1 (READs lba 105144 through 105528, plus WRITE lba 106152), and spdk_nvme_print_completion completes every one of them identically with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:03.664 [2024-10-08 18:35:56.953764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.664 [2024-10-08 18:35:56.953771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.664 [2024-10-08 18:35:56.953779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.664 [2024-10-08 18:35:56.953785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.664 [2024-10-08 18:35:56.953795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.664 [2024-10-08 18:35:56.953802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.664 [2024-10-08 18:35:56.953810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.664 [2024-10-08 18:35:56.953816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.664 [2024-10-08 18:35:56.953824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.664 [2024-10-08 18:35:56.953831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.664 [2024-10-08 18:35:56.953839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.664 [2024-10-08 18:35:56.953846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.664 [2024-10-08 18:35:56.953854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.664 [2024-10-08 18:35:56.953861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.953869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.953876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.953884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.953890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.953899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.953905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 
18:35:56.953913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.953920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.953928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.953934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.953942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.953948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.953956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.953963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.953971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.953979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.953987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.953994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.665 [2024-10-08 18:35:56.954583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.665 [2024-10-08 18:35:56.954589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954612] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.666 [2024-10-08 18:35:56.954811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.666 [2024-10-08 18:35:56.954900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.666 [2024-10-08 18:35:56.954908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106088 len:8 SGL 
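A note on the status seen above: SPDK prints completions as (status code type / status code), so (00/08) is status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion; every command still queued on qid:1 is failed back with it when the submission queue is torn down for the reset. A minimal decoding sketch for just this case (the decode helper and table are illustrative, not SPDK API):

    # Hypothetical decoder for the one (sct/sc) pair seen in this log.
    # Assumes the NVMe generic command status table: SCT 0x0, SC 0x08
    # = Command Aborted due to SQ Deletion.
    GENERIC_STATUS = {0x08: "ABORTED - SQ DELETION"}

    def decode(sct: int, sc: int) -> str:
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
        return f"sct=0x{sct:x} sc=0x{sc:02x}"

    assert decode(0x00, 0x08) == "ABORTED - SQ DELETION"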
00:28:03.666 [2024-10-08 18:35:56.954922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2062590 is same with the state(6) to be set
00:28:03.666 [2024-10-08 18:35:56.954930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:03.666 [2024-10-08 18:35:56.954936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:03.666 [2024-10-08 18:35:56.954942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106096 len:8 PRP1 0x0 PRP2 0x0
00:28:03.666 [2024-10-08 18:35:56.954949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.666 [2024-10-08 18:35:56.954991] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2062590 was disconnected and freed. reset controller.
00:28:03.666 [2024-10-08 18:35:56.957779] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:03.666 [2024-10-08 18:35:56.957832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:03.666 [2024-10-08 18:35:56.958307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.666 [2024-10-08 18:35:56.958323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:03.666 [2024-10-08 18:35:56.958331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:03.666 [2024-10-08 18:35:56.958509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:03.666 [2024-10-08 18:35:56.958683] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:03.666 [2024-10-08 18:35:56.958691] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:03.666 [2024-10-08 18:35:56.958699] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:03.666 [2024-10-08 18:35:56.961441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
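errno = 111 is ECONNREFUSED on Linux: nothing is accepting TCP connections at 10.0.0.2 port 4420 while the target is down, so the qpair reconnect fails, controller reinitialization is abandoned, and bdev_nvme immediately schedules another reset. A quick way to size up a log like this is to count the three record types involved; a sketch assuming the exact record text shown above (tally is a hypothetical helper, not part of SPDK):

    import re
    import sys
    from collections import Counter

    # Patterns copied from the record formats in this log.
    ABORT_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")
    RESET_RE = re.compile(r"nvme_ctrlr_disconnect: \*NOTICE\*: \[[^\]]+\] resetting controller")
    REFUSED_RE = re.compile(r"connect\(\) failed, errno = 111")

    def tally(lines):
        """Count aborted completions, reset attempts, and refused connects."""
        counts = Counter()
        for line in lines:
            if ABORT_RE.search(line):
                counts["aborted_completions"] += 1
            if RESET_RE.search(line):
                counts["reset_attempts"] += 1
            if REFUSED_RE.search(line):
                counts["connect_refused"] += 1
        return counts

    if __name__ == "__main__":
        print(tally(sys.stdin))  # e.g. pipe the captured console log through stdin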
00:28:03.666 [2024-10-08 18:35:56.970922] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:03.666 [2024-10-08 18:35:56.971357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.666 [2024-10-08 18:35:56.971382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:03.666 [2024-10-08 18:35:56.971391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:03.666 [2024-10-08 18:35:56.971563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:03.666 [2024-10-08 18:35:56.971743] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:03.666 [2024-10-08 18:35:56.971751] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:03.666 [2024-10-08 18:35:56.971758] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:03.666 [2024-10-08 18:35:56.974351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... 20 further reset cycles trimmed (resetting controller at 18:35:56.983890 through 18:35:57.228431); each repeats the same record sequence, timestamps aside, ending in connect() failed, errno = 111 and Resetting controller failed. ...]
00:28:03.928 [2024-10-08 18:35:57.241132] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:03.928 [2024-10-08 18:35:57.241523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.928 [2024-10-08 18:35:57.241539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:03.928 [2024-10-08 18:35:57.241546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:03.928 [2024-10-08 18:35:57.241704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:03.928 [2024-10-08 18:35:57.241861] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:03.928 [2024-10-08 18:35:57.241869] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:03.928 [2024-10-08 18:35:57.241875] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:03.928 [2024-10-08 18:35:57.244584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.187 [2024-10-08 18:35:57.254114] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.187 [2024-10-08 18:35:57.254534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.187 [2024-10-08 18:35:57.254549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.187 [2024-10-08 18:35:57.254557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.187 [2024-10-08 18:35:57.254723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.187 [2024-10-08 18:35:57.254890] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.187 [2024-10-08 18:35:57.254897] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.187 [2024-10-08 18:35:57.254904] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.187 [2024-10-08 18:35:57.257552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.187 [2024-10-08 18:35:57.266938] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.187 [2024-10-08 18:35:57.267398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.187 [2024-10-08 18:35:57.267443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.187 [2024-10-08 18:35:57.267466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.187 [2024-10-08 18:35:57.267999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.187 [2024-10-08 18:35:57.268393] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.187 [2024-10-08 18:35:57.268412] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.187 [2024-10-08 18:35:57.268426] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.187 [2024-10-08 18:35:57.274633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.187 [2024-10-08 18:35:57.281807] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.187 [2024-10-08 18:35:57.282309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.187 [2024-10-08 18:35:57.282351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.187 [2024-10-08 18:35:57.282388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.187 [2024-10-08 18:35:57.282969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.187 [2024-10-08 18:35:57.283458] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.187 [2024-10-08 18:35:57.283471] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.187 [2024-10-08 18:35:57.283482] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.187 [2024-10-08 18:35:57.287537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.187 [2024-10-08 18:35:57.294685] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.187 [2024-10-08 18:35:57.295113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.187 [2024-10-08 18:35:57.295129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.187 [2024-10-08 18:35:57.295136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.187 [2024-10-08 18:35:57.295303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.187 [2024-10-08 18:35:57.295476] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.187 [2024-10-08 18:35:57.295485] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.187 [2024-10-08 18:35:57.295491] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.187 [2024-10-08 18:35:57.298136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.187 [2024-10-08 18:35:57.307430] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.187 [2024-10-08 18:35:57.307846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.187 [2024-10-08 18:35:57.307861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.187 [2024-10-08 18:35:57.307868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.188 [2024-10-08 18:35:57.308026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.188 [2024-10-08 18:35:57.308183] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.188 [2024-10-08 18:35:57.308190] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.188 [2024-10-08 18:35:57.308200] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.188 [2024-10-08 18:35:57.310807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.188 [2024-10-08 18:35:57.320208] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.188 [2024-10-08 18:35:57.320575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.188 [2024-10-08 18:35:57.320619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.188 [2024-10-08 18:35:57.320643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.188 [2024-10-08 18:35:57.321192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.188 [2024-10-08 18:35:57.321360] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.188 [2024-10-08 18:35:57.321368] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.188 [2024-10-08 18:35:57.321374] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.188 [2024-10-08 18:35:57.323975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.188 9842.33 IOPS, 38.45 MiB/s [2024-10-08T16:35:57.511Z] [2024-10-08 18:35:57.333785] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.188 [2024-10-08 18:35:57.334202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.188 [2024-10-08 18:35:57.334218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.188 [2024-10-08 18:35:57.334225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.188 [2024-10-08 18:35:57.334387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.188 [2024-10-08 18:35:57.334568] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.188 [2024-10-08 18:35:57.334576] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.188 [2024-10-08 18:35:57.334583] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.188 [2024-10-08 18:35:57.337227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.188 [2024-10-08 18:35:57.346576] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.188 [2024-10-08 18:35:57.346965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.188 [2024-10-08 18:35:57.346980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.188 [2024-10-08 18:35:57.346987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.188 [2024-10-08 18:35:57.347144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.188 [2024-10-08 18:35:57.347301] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.188 [2024-10-08 18:35:57.347309] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.188 [2024-10-08 18:35:57.347314] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.188 [2024-10-08 18:35:57.349924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.188 [2024-10-08 18:35:57.359304] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.188 [2024-10-08 18:35:57.359722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.188 [2024-10-08 18:35:57.359741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.188 [2024-10-08 18:35:57.359748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.188 [2024-10-08 18:35:57.359914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.188 [2024-10-08 18:35:57.360081] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.188 [2024-10-08 18:35:57.360089] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.188 [2024-10-08 18:35:57.360095] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.188 [2024-10-08 18:35:57.362704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.188 [2024-10-08 18:35:57.372042] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.188 [2024-10-08 18:35:57.372443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.188 [2024-10-08 18:35:57.372489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.188 [2024-10-08 18:35:57.372513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.188 [2024-10-08 18:35:57.373089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.188 [2024-10-08 18:35:57.373684] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.188 [2024-10-08 18:35:57.373711] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.188 [2024-10-08 18:35:57.373732] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.188 [2024-10-08 18:35:57.376399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.188 [2024-10-08 18:35:57.384775] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.188 [2024-10-08 18:35:57.385140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.188 [2024-10-08 18:35:57.385155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.188 [2024-10-08 18:35:57.385162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.188 [2024-10-08 18:35:57.385319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.188 [2024-10-08 18:35:57.385502] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.188 [2024-10-08 18:35:57.385510] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.188 [2024-10-08 18:35:57.385517] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.188 [2024-10-08 18:35:57.388115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.188 [2024-10-08 18:35:57.397490] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.188 [2024-10-08 18:35:57.397886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.188 [2024-10-08 18:35:57.397901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.188 [2024-10-08 18:35:57.397908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.188 [2024-10-08 18:35:57.398066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.188 [2024-10-08 18:35:57.398226] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.188 [2024-10-08 18:35:57.398234] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.188 [2024-10-08 18:35:57.398240] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.188 [2024-10-08 18:35:57.400940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.189 [2024-10-08 18:35:57.410373] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.189 [2024-10-08 18:35:57.410809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.189 [2024-10-08 18:35:57.410853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.189 [2024-10-08 18:35:57.410876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.189 [2024-10-08 18:35:57.411471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.189 [2024-10-08 18:35:57.411925] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.189 [2024-10-08 18:35:57.411933] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.189 [2024-10-08 18:35:57.411939] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.189 [2024-10-08 18:35:57.414541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.189 [2024-10-08 18:35:57.423183] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.189 [2024-10-08 18:35:57.423571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.189 [2024-10-08 18:35:57.423588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.189 [2024-10-08 18:35:57.423595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.189 [2024-10-08 18:35:57.423752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.189 [2024-10-08 18:35:57.423910] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.189 [2024-10-08 18:35:57.423917] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.189 [2024-10-08 18:35:57.423923] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.189 [2024-10-08 18:35:57.426530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.189 [2024-10-08 18:35:57.436005] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.189 [2024-10-08 18:35:57.436445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.189 [2024-10-08 18:35:57.436461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.189 [2024-10-08 18:35:57.436468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.189 [2024-10-08 18:35:57.436644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.189 [2024-10-08 18:35:57.436806] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.189 [2024-10-08 18:35:57.436814] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.189 [2024-10-08 18:35:57.436820] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.189 [2024-10-08 18:35:57.439431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.189 [2024-10-08 18:35:57.448900] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.189 [2024-10-08 18:35:57.449335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.189 [2024-10-08 18:35:57.449350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.189 [2024-10-08 18:35:57.449358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.189 [2024-10-08 18:35:57.449530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.189 [2024-10-08 18:35:57.449696] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.189 [2024-10-08 18:35:57.449704] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.189 [2024-10-08 18:35:57.449710] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.189 [2024-10-08 18:35:57.452302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.189 [2024-10-08 18:35:57.461636] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.189 [2024-10-08 18:35:57.461985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.189 [2024-10-08 18:35:57.462001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.189 [2024-10-08 18:35:57.462008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.189 [2024-10-08 18:35:57.462176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.189 [2024-10-08 18:35:57.462344] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.189 [2024-10-08 18:35:57.462352] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.189 [2024-10-08 18:35:57.462358] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.189 [2024-10-08 18:35:57.465107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.189 [2024-10-08 18:35:57.474626] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.189 [2024-10-08 18:35:57.474994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.189 [2024-10-08 18:35:57.475037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.189 [2024-10-08 18:35:57.475061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.189 [2024-10-08 18:35:57.475565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.189 [2024-10-08 18:35:57.475733] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.189 [2024-10-08 18:35:57.475741] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.189 [2024-10-08 18:35:57.475747] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.189 [2024-10-08 18:35:57.478400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.189 [2024-10-08 18:35:57.487419] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.189 [2024-10-08 18:35:57.487860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.189 [2024-10-08 18:35:57.487899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.189 [2024-10-08 18:35:57.487933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.189 [2024-10-08 18:35:57.488525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.189 [2024-10-08 18:35:57.489066] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.189 [2024-10-08 18:35:57.489073] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.189 [2024-10-08 18:35:57.489080] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.189 [2024-10-08 18:35:57.491677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.189 [2024-10-08 18:35:57.500187] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.189 [2024-10-08 18:35:57.500621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.189 [2024-10-08 18:35:57.500665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.189 [2024-10-08 18:35:57.500689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.189 [2024-10-08 18:35:57.501266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.189 [2024-10-08 18:35:57.501833] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.189 [2024-10-08 18:35:57.501842] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.189 [2024-10-08 18:35:57.501848] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.189 [2024-10-08 18:35:57.504537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.450 [2024-10-08 18:35:57.512941] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.450 [2024-10-08 18:35:57.513360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.450 [2024-10-08 18:35:57.513381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.450 [2024-10-08 18:35:57.513389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.450 [2024-10-08 18:35:57.513555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.450 [2024-10-08 18:35:57.513721] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.450 [2024-10-08 18:35:57.513730] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.450 [2024-10-08 18:35:57.513736] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.450 [2024-10-08 18:35:57.516405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.450 [2024-10-08 18:35:57.525781] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.450 [2024-10-08 18:35:57.526117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.450 [2024-10-08 18:35:57.526132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.450 [2024-10-08 18:35:57.526139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.450 [2024-10-08 18:35:57.526296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.450 [2024-10-08 18:35:57.526478] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.450 [2024-10-08 18:35:57.526490] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.450 [2024-10-08 18:35:57.526497] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.450 [2024-10-08 18:35:57.529087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.450 [2024-10-08 18:35:57.538613] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.450 [2024-10-08 18:35:57.539038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.450 [2024-10-08 18:35:57.539083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.450 [2024-10-08 18:35:57.539107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.450 [2024-10-08 18:35:57.539700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.450 [2024-10-08 18:35:57.540224] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.450 [2024-10-08 18:35:57.540241] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.450 [2024-10-08 18:35:57.540255] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.450 [2024-10-08 18:35:57.546472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.450 [2024-10-08 18:35:57.553479] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.450 [2024-10-08 18:35:57.553996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.450 [2024-10-08 18:35:57.554047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.450 [2024-10-08 18:35:57.554071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.450 [2024-10-08 18:35:57.554667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.450 [2024-10-08 18:35:57.554946] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.450 [2024-10-08 18:35:57.554957] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.450 [2024-10-08 18:35:57.554967] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.450 [2024-10-08 18:35:57.559017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.450 [2024-10-08 18:35:57.566472] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.450 [2024-10-08 18:35:57.566905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.450 [2024-10-08 18:35:57.566921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.450 [2024-10-08 18:35:57.566929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.450 [2024-10-08 18:35:57.567096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.450 [2024-10-08 18:35:57.567262] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.450 [2024-10-08 18:35:57.567270] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.450 [2024-10-08 18:35:57.567276] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.450 [2024-10-08 18:35:57.569942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.450 [2024-10-08 18:35:57.579170] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.450 [2024-10-08 18:35:57.579617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.450 [2024-10-08 18:35:57.579660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.450 [2024-10-08 18:35:57.579684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.450 [2024-10-08 18:35:57.580141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.450 [2024-10-08 18:35:57.580307] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.450 [2024-10-08 18:35:57.580315] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.450 [2024-10-08 18:35:57.580322] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.450 [2024-10-08 18:35:57.582919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.450 [2024-10-08 18:35:57.592010] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.450 [2024-10-08 18:35:57.592442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.450 [2024-10-08 18:35:57.592458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.450 [2024-10-08 18:35:57.592465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.450 [2024-10-08 18:35:57.592631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.450 [2024-10-08 18:35:57.592797] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.450 [2024-10-08 18:35:57.592805] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.450 [2024-10-08 18:35:57.592811] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.450 [2024-10-08 18:35:57.595435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.450 [2024-10-08 18:35:57.604781] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.450 [2024-10-08 18:35:57.605187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.450 [2024-10-08 18:35:57.605232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.450 [2024-10-08 18:35:57.605255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.450 [2024-10-08 18:35:57.605848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.451 [2024-10-08 18:35:57.606289] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.451 [2024-10-08 18:35:57.606297] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.451 [2024-10-08 18:35:57.606303] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.451 [2024-10-08 18:35:57.608930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.451 [2024-10-08 18:35:57.617632] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.451 [2024-10-08 18:35:57.618058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.451 [2024-10-08 18:35:57.618102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.451 [2024-10-08 18:35:57.618125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.451 [2024-10-08 18:35:57.618727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.451 [2024-10-08 18:35:57.619245] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.451 [2024-10-08 18:35:57.619253] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.451 [2024-10-08 18:35:57.619259] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.451 [2024-10-08 18:35:57.621927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.451 [2024-10-08 18:35:57.630337] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.451 [2024-10-08 18:35:57.630758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.451 [2024-10-08 18:35:57.630802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.451 [2024-10-08 18:35:57.630826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.451 [2024-10-08 18:35:57.631340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.451 [2024-10-08 18:35:57.631736] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.451 [2024-10-08 18:35:57.631754] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.451 [2024-10-08 18:35:57.631769] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.451 [2024-10-08 18:35:57.638340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.451 [2024-10-08 18:35:57.645103] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.451 [2024-10-08 18:35:57.645604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.451 [2024-10-08 18:35:57.645627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.451 [2024-10-08 18:35:57.645638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.451 [2024-10-08 18:35:57.645891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.451 [2024-10-08 18:35:57.646144] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.451 [2024-10-08 18:35:57.646156] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.451 [2024-10-08 18:35:57.646165] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.451 [2024-10-08 18:35:57.650214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.451 [2024-10-08 18:35:57.658158] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.451 [2024-10-08 18:35:57.658590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.451 [2024-10-08 18:35:57.658607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.451 [2024-10-08 18:35:57.658615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.451 [2024-10-08 18:35:57.658787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.451 [2024-10-08 18:35:57.658958] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.451 [2024-10-08 18:35:57.658966] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.451 [2024-10-08 18:35:57.658976] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.451 [2024-10-08 18:35:57.661711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.451 [2024-10-08 18:35:57.670961] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.451 [2024-10-08 18:35:57.671380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.451 [2024-10-08 18:35:57.671396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.451 [2024-10-08 18:35:57.671419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.451 [2024-10-08 18:35:57.671587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.451 [2024-10-08 18:35:57.671753] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.451 [2024-10-08 18:35:57.671761] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.451 [2024-10-08 18:35:57.671768] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.451 [2024-10-08 18:35:57.674392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.451 [2024-10-08 18:35:57.683810] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.451 [2024-10-08 18:35:57.684171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.451 [2024-10-08 18:35:57.684187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.451 [2024-10-08 18:35:57.684194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.451 [2024-10-08 18:35:57.684360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.451 [2024-10-08 18:35:57.684552] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.451 [2024-10-08 18:35:57.684561] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.451 [2024-10-08 18:35:57.684567] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.451 [2024-10-08 18:35:57.687195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.451 [2024-10-08 18:35:57.696671] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.451 [2024-10-08 18:35:57.696961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.451 [2024-10-08 18:35:57.696977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.451 [2024-10-08 18:35:57.696984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.451 [2024-10-08 18:35:57.697150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.451 [2024-10-08 18:35:57.697317] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.451 [2024-10-08 18:35:57.697327] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.451 [2024-10-08 18:35:57.697334] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.451 [2024-10-08 18:35:57.699932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.451 [2024-10-08 18:35:57.709519] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.451 [2024-10-08 18:35:57.709933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.451 [2024-10-08 18:35:57.709975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.451 [2024-10-08 18:35:57.709999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.451 [2024-10-08 18:35:57.710508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.451 [2024-10-08 18:35:57.710675] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.451 [2024-10-08 18:35:57.710683] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.451 [2024-10-08 18:35:57.710690] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.451 [2024-10-08 18:35:57.713348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.451 [2024-10-08 18:35:57.722552] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.451 [2024-10-08 18:35:57.722912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.451 [2024-10-08 18:35:57.722928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.451 [2024-10-08 18:35:57.722935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.451 [2024-10-08 18:35:57.723106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.451 [2024-10-08 18:35:57.723276] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.451 [2024-10-08 18:35:57.723285] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.451 [2024-10-08 18:35:57.723291] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.451 [2024-10-08 18:35:57.726027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.451 [2024-10-08 18:35:57.735434] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.451 [2024-10-08 18:35:57.735796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.451 [2024-10-08 18:35:57.735812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.451 [2024-10-08 18:35:57.735819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.451 [2024-10-08 18:35:57.735986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.451 [2024-10-08 18:35:57.736152] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.451 [2024-10-08 18:35:57.736160] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.451 [2024-10-08 18:35:57.736167] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.451 [2024-10-08 18:35:57.738854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.451 [2024-10-08 18:35:57.748324] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.451 [2024-10-08 18:35:57.748769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.452 [2024-10-08 18:35:57.748822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.452 [2024-10-08 18:35:57.748846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.452 [2024-10-08 18:35:57.749410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.452 [2024-10-08 18:35:57.749578] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.452 [2024-10-08 18:35:57.749586] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.452 [2024-10-08 18:35:57.749593] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.452 [2024-10-08 18:35:57.752184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.452 [2024-10-08 18:35:57.761182] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.452 [2024-10-08 18:35:57.761616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.452 [2024-10-08 18:35:57.761660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:04.452 [2024-10-08 18:35:57.761683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:04.452 [2024-10-08 18:35:57.762260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:04.452 [2024-10-08 18:35:57.762683] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.452 [2024-10-08 18:35:57.762692] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.452 [2024-10-08 18:35:57.762698] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.452 [2024-10-08 18:35:57.768766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.712 [2024-10-08 18:35:57.776133] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.712 [2024-10-08 18:35:57.776595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.712 [2024-10-08 18:35:57.776638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.712 [2024-10-08 18:35:57.776661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.712 [2024-10-08 18:35:57.777237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.712 [2024-10-08 18:35:57.777710] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.712 [2024-10-08 18:35:57.777723] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.712 [2024-10-08 18:35:57.777732] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.712 [2024-10-08 18:35:57.781778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.712 [2024-10-08 18:35:57.789028] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.712 [2024-10-08 18:35:57.789434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.712 [2024-10-08 18:35:57.789451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.712 [2024-10-08 18:35:57.789458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.712 [2024-10-08 18:35:57.789624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.712 [2024-10-08 18:35:57.789791] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.712 [2024-10-08 18:35:57.789800] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.712 [2024-10-08 18:35:57.789809] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.712 [2024-10-08 18:35:57.792466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.712 [2024-10-08 18:35:57.801832] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.712 [2024-10-08 18:35:57.802243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.712 [2024-10-08 18:35:57.802259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.712 [2024-10-08 18:35:57.802266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.712 [2024-10-08 18:35:57.802439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.712 [2024-10-08 18:35:57.802606] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.712 [2024-10-08 18:35:57.802614] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.712 [2024-10-08 18:35:57.802620] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.712 [2024-10-08 18:35:57.805215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.712 [2024-10-08 18:35:57.814564] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.712 [2024-10-08 18:35:57.814986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.712 [2024-10-08 18:35:57.815002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.712 [2024-10-08 18:35:57.815009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.712 [2024-10-08 18:35:57.815175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.712 [2024-10-08 18:35:57.815341] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.712 [2024-10-08 18:35:57.815349] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.713 [2024-10-08 18:35:57.815355] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.713 [2024-10-08 18:35:57.817954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.713 [2024-10-08 18:35:57.827346] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.713 [2024-10-08 18:35:57.827748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.713 [2024-10-08 18:35:57.827764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.713 [2024-10-08 18:35:57.827771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.713 [2024-10-08 18:35:57.827937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.713 [2024-10-08 18:35:57.828104] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.713 [2024-10-08 18:35:57.828112] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.713 [2024-10-08 18:35:57.828118] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.713 [2024-10-08 18:35:57.830721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.713 [2024-10-08 18:35:57.840042] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.713 [2024-10-08 18:35:57.840437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.713 [2024-10-08 18:35:57.840456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.713 [2024-10-08 18:35:57.840462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.713 [2024-10-08 18:35:57.840620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.713 [2024-10-08 18:35:57.840777] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.713 [2024-10-08 18:35:57.840785] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.713 [2024-10-08 18:35:57.840791] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.713 [2024-10-08 18:35:57.843399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.713 [2024-10-08 18:35:57.852776] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.713 [2024-10-08 18:35:57.853198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.713 [2024-10-08 18:35:57.853240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.713 [2024-10-08 18:35:57.853263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.713 [2024-10-08 18:35:57.853736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.713 [2024-10-08 18:35:57.853904] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.713 [2024-10-08 18:35:57.853912] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.713 [2024-10-08 18:35:57.853919] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.713 [2024-10-08 18:35:57.859712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.713 [2024-10-08 18:35:57.867778] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.713 [2024-10-08 18:35:57.868270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.713 [2024-10-08 18:35:57.868291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.713 [2024-10-08 18:35:57.868302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.713 [2024-10-08 18:35:57.868562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.713 [2024-10-08 18:35:57.868816] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.713 [2024-10-08 18:35:57.868828] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.713 [2024-10-08 18:35:57.868837] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.713 [2024-10-08 18:35:57.872878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.713 [2024-10-08 18:35:57.880747] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.713 [2024-10-08 18:35:57.881144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.713 [2024-10-08 18:35:57.881160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.713 [2024-10-08 18:35:57.881168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.713 [2024-10-08 18:35:57.881334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.713 [2024-10-08 18:35:57.881509] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.713 [2024-10-08 18:35:57.881518] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.713 [2024-10-08 18:35:57.881524] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.713 [2024-10-08 18:35:57.884177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.713 [2024-10-08 18:35:57.893542] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.713 [2024-10-08 18:35:57.893933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.713 [2024-10-08 18:35:57.893948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.713 [2024-10-08 18:35:57.893955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.713 [2024-10-08 18:35:57.894121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.713 [2024-10-08 18:35:57.894287] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.713 [2024-10-08 18:35:57.894295] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.713 [2024-10-08 18:35:57.894302] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.713 [2024-10-08 18:35:57.896902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.713 [2024-10-08 18:35:57.906278] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.713 [2024-10-08 18:35:57.906688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.713 [2024-10-08 18:35:57.906704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.713 [2024-10-08 18:35:57.906711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.713 [2024-10-08 18:35:57.906877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.713 [2024-10-08 18:35:57.907043] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.713 [2024-10-08 18:35:57.907051] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.713 [2024-10-08 18:35:57.907057] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.713 [2024-10-08 18:35:57.909667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.713 [2024-10-08 18:35:57.919077] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.713 [2024-10-08 18:35:57.919491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.713 [2024-10-08 18:35:57.919508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.713 [2024-10-08 18:35:57.919515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.713 [2024-10-08 18:35:57.919681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.713 [2024-10-08 18:35:57.919848] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.713 [2024-10-08 18:35:57.919857] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.713 [2024-10-08 18:35:57.919863] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.713 [2024-10-08 18:35:57.922523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.713 [2024-10-08 18:35:57.932009] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.713 [2024-10-08 18:35:57.932385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.713 [2024-10-08 18:35:57.932402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.713 [2024-10-08 18:35:57.932409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.713 [2024-10-08 18:35:57.932575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.713 [2024-10-08 18:35:57.932742] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.713 [2024-10-08 18:35:57.932750] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.713 [2024-10-08 18:35:57.932756] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.713 [2024-10-08 18:35:57.935498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.713 [2024-10-08 18:35:57.944771] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.713 [2024-10-08 18:35:57.945160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.713 [2024-10-08 18:35:57.945176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.713 [2024-10-08 18:35:57.945182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.713 [2024-10-08 18:35:57.945339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.713 [2024-10-08 18:35:57.945525] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.714 [2024-10-08 18:35:57.945534] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.714 [2024-10-08 18:35:57.945540] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.714 [2024-10-08 18:35:57.948132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.714 [2024-10-08 18:35:57.957527] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.714 [2024-10-08 18:35:57.957911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.714 [2024-10-08 18:35:57.957925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.714 [2024-10-08 18:35:57.957932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.714 [2024-10-08 18:35:57.958089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.714 [2024-10-08 18:35:57.958246] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.714 [2024-10-08 18:35:57.958261] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.714 [2024-10-08 18:35:57.958267] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.714 [2024-10-08 18:35:57.960932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.714 [2024-10-08 18:35:57.970381] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.714 [2024-10-08 18:35:57.970726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.714 [2024-10-08 18:35:57.970742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.714 [2024-10-08 18:35:57.970753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.714 [2024-10-08 18:35:57.970924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.714 [2024-10-08 18:35:57.971094] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.714 [2024-10-08 18:35:57.971103] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.714 [2024-10-08 18:35:57.971109] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.714 [2024-10-08 18:35:57.973853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.714 [2024-10-08 18:35:57.983485] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.714 [2024-10-08 18:35:57.983840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.714 [2024-10-08 18:35:57.983856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.714 [2024-10-08 18:35:57.983864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.714 [2024-10-08 18:35:57.984030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.714 [2024-10-08 18:35:57.984196] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.714 [2024-10-08 18:35:57.984204] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.714 [2024-10-08 18:35:57.984211] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.714 [2024-10-08 18:35:57.986919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.714 [2024-10-08 18:35:57.996436] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.714 [2024-10-08 18:35:57.996882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.714 [2024-10-08 18:35:57.996938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.714 [2024-10-08 18:35:57.996962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.714 [2024-10-08 18:35:57.997554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.714 [2024-10-08 18:35:57.998094] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.714 [2024-10-08 18:35:57.998103] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.714 [2024-10-08 18:35:57.998109] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.714 [2024-10-08 18:35:58.000771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.714 [2024-10-08 18:35:58.009301] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.714 [2024-10-08 18:35:58.009602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.714 [2024-10-08 18:35:58.009618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.714 [2024-10-08 18:35:58.009625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.714 [2024-10-08 18:35:58.009792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.714 [2024-10-08 18:35:58.009959] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.714 [2024-10-08 18:35:58.009970] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.714 [2024-10-08 18:35:58.009977] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.714 [2024-10-08 18:35:58.012579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.714 [2024-10-08 18:35:58.022126] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.714 [2024-10-08 18:35:58.022541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.714 [2024-10-08 18:35:58.022558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.714 [2024-10-08 18:35:58.022566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.714 [2024-10-08 18:35:58.022732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.714 [2024-10-08 18:35:58.022899] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.714 [2024-10-08 18:35:58.022907] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.714 [2024-10-08 18:35:58.022913] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.714 [2024-10-08 18:35:58.025646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.975 [2024-10-08 18:35:58.035154] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.975 [2024-10-08 18:35:58.035589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.975 [2024-10-08 18:35:58.035606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.975 [2024-10-08 18:35:58.035614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.975 [2024-10-08 18:35:58.035780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.975 [2024-10-08 18:35:58.035947] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.975 [2024-10-08 18:35:58.035955] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.975 [2024-10-08 18:35:58.035962] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.975 [2024-10-08 18:35:58.038627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.975 [2024-10-08 18:35:58.048006] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.975 [2024-10-08 18:35:58.048368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.975 [2024-10-08 18:35:58.048390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.975 [2024-10-08 18:35:58.048397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.975 [2024-10-08 18:35:58.048564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.975 [2024-10-08 18:35:58.048730] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.975 [2024-10-08 18:35:58.048739] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.975 [2024-10-08 18:35:58.048746] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.975 [2024-10-08 18:35:58.051410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.975 [2024-10-08 18:35:58.060887] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.975 [2024-10-08 18:35:58.061256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.975 [2024-10-08 18:35:58.061271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.975 [2024-10-08 18:35:58.061278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.975 [2024-10-08 18:35:58.061452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.975 [2024-10-08 18:35:58.061620] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.975 [2024-10-08 18:35:58.061628] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.975 [2024-10-08 18:35:58.061634] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.975 [2024-10-08 18:35:58.064267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.975 [2024-10-08 18:35:58.073848] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.975 [2024-10-08 18:35:58.074283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.975 [2024-10-08 18:35:58.074299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.975 [2024-10-08 18:35:58.074307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.975 [2024-10-08 18:35:58.074479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.975 [2024-10-08 18:35:58.074646] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.975 [2024-10-08 18:35:58.074654] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.975 [2024-10-08 18:35:58.074660] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.975 [2024-10-08 18:35:58.077321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.975 [2024-10-08 18:35:58.086735] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.975 [2024-10-08 18:35:58.087145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.975 [2024-10-08 18:35:58.087188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.975 [2024-10-08 18:35:58.087212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.975 [2024-10-08 18:35:58.087812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.975 [2024-10-08 18:35:58.088358] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.975 [2024-10-08 18:35:58.088366] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.975 [2024-10-08 18:35:58.088373] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.975 [2024-10-08 18:35:58.090972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.975 [2024-10-08 18:35:58.099511] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.975 [2024-10-08 18:35:58.099932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.975 [2024-10-08 18:35:58.099948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.975 [2024-10-08 18:35:58.099955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.975 [2024-10-08 18:35:58.100125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.975 [2024-10-08 18:35:58.100291] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.975 [2024-10-08 18:35:58.100299] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.975 [2024-10-08 18:35:58.100306] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.975 [2024-10-08 18:35:58.102908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.975 [2024-10-08 18:35:58.112442] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.975 [2024-10-08 18:35:58.112722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.975 [2024-10-08 18:35:58.112737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.975 [2024-10-08 18:35:58.112744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.975 [2024-10-08 18:35:58.112910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.975 [2024-10-08 18:35:58.113077] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.975 [2024-10-08 18:35:58.113086] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.975 [2024-10-08 18:35:58.113092] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.975 [2024-10-08 18:35:58.115807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.975 [2024-10-08 18:35:58.125378] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.975 [2024-10-08 18:35:58.125667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.975 [2024-10-08 18:35:58.125683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.975 [2024-10-08 18:35:58.125690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.975 [2024-10-08 18:35:58.125857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.975 [2024-10-08 18:35:58.126023] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.975 [2024-10-08 18:35:58.126032] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.975 [2024-10-08 18:35:58.126038] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.975 [2024-10-08 18:35:58.128732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.975 [2024-10-08 18:35:58.138290] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.975 [2024-10-08 18:35:58.138591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.975 [2024-10-08 18:35:58.138607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.975 [2024-10-08 18:35:58.138614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.975 [2024-10-08 18:35:58.138781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.975 [2024-10-08 18:35:58.138947] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.975 [2024-10-08 18:35:58.138956] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.975 [2024-10-08 18:35:58.138965] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.975 [2024-10-08 18:35:58.141654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.975 [2024-10-08 18:35:58.151208] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.975 [2024-10-08 18:35:58.151520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.975 [2024-10-08 18:35:58.151536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.975 [2024-10-08 18:35:58.151544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.976 [2024-10-08 18:35:58.151720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.976 [2024-10-08 18:35:58.151887] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.976 [2024-10-08 18:35:58.151895] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.976 [2024-10-08 18:35:58.151901] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.976 [2024-10-08 18:35:58.154522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.976 [2024-10-08 18:35:58.164035] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.976 [2024-10-08 18:35:58.164450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.976 [2024-10-08 18:35:58.164467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.976 [2024-10-08 18:35:58.164474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.976 [2024-10-08 18:35:58.164647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.976 [2024-10-08 18:35:58.164805] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.976 [2024-10-08 18:35:58.164813] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.976 [2024-10-08 18:35:58.164818] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.976 [2024-10-08 18:35:58.167430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.976 [2024-10-08 18:35:58.176851] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.976 [2024-10-08 18:35:58.177306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.976 [2024-10-08 18:35:58.177321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.976 [2024-10-08 18:35:58.177329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.976 [2024-10-08 18:35:58.177500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.976 [2024-10-08 18:35:58.177667] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.976 [2024-10-08 18:35:58.177676] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.976 [2024-10-08 18:35:58.177682] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.976 [2024-10-08 18:35:58.180275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.976 [2024-10-08 18:35:58.189661] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.976 [2024-10-08 18:35:58.190003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.976 [2024-10-08 18:35:58.190033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.976 [2024-10-08 18:35:58.190057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.976 [2024-10-08 18:35:58.190610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.976 [2024-10-08 18:35:58.190793] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.976 [2024-10-08 18:35:58.190802] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.976 [2024-10-08 18:35:58.190809] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.976 [2024-10-08 18:35:58.193411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.976 [2024-10-08 18:35:58.202419] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.976 [2024-10-08 18:35:58.202753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.976 [2024-10-08 18:35:58.202769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.976 [2024-10-08 18:35:58.202776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.976 [2024-10-08 18:35:58.202942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.976 [2024-10-08 18:35:58.203109] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.976 [2024-10-08 18:35:58.203117] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.976 [2024-10-08 18:35:58.203124] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.976 [2024-10-08 18:35:58.205731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.976 [2024-10-08 18:35:58.215207] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.976 [2024-10-08 18:35:58.215599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.976 [2024-10-08 18:35:58.215615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.976 [2024-10-08 18:35:58.215622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.976 [2024-10-08 18:35:58.215788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.976 [2024-10-08 18:35:58.215955] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.976 [2024-10-08 18:35:58.215963] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.976 [2024-10-08 18:35:58.215969] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.976 [2024-10-08 18:35:58.218585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.976 [2024-10-08 18:35:58.228076] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.976 [2024-10-08 18:35:58.228497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.976 [2024-10-08 18:35:58.228514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.976 [2024-10-08 18:35:58.228521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.976 [2024-10-08 18:35:58.228692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.976 [2024-10-08 18:35:58.228869] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.976 [2024-10-08 18:35:58.228877] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.976 [2024-10-08 18:35:58.228884] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.976 [2024-10-08 18:35:58.231642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.976 [2024-10-08 18:35:58.241149] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.976 [2024-10-08 18:35:58.241541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.976 [2024-10-08 18:35:58.241558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.976 [2024-10-08 18:35:58.241566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.976 [2024-10-08 18:35:58.241736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.976 [2024-10-08 18:35:58.241917] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.976 [2024-10-08 18:35:58.241925] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.976 [2024-10-08 18:35:58.241931] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.976 [2024-10-08 18:35:58.244551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.976 [2024-10-08 18:35:58.254039] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.976 [2024-10-08 18:35:58.254471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.976 [2024-10-08 18:35:58.254516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.976 [2024-10-08 18:35:58.254540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.976 [2024-10-08 18:35:58.254786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.976 [2024-10-08 18:35:58.254953] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.976 [2024-10-08 18:35:58.254963] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.976 [2024-10-08 18:35:58.254969] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.976 [2024-10-08 18:35:58.257567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.976 [2024-10-08 18:35:58.266887] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.976 [2024-10-08 18:35:58.267303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.976 [2024-10-08 18:35:58.267318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.976 [2024-10-08 18:35:58.267326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.976 [2024-10-08 18:35:58.267496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.976 [2024-10-08 18:35:58.267663] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.976 [2024-10-08 18:35:58.267672] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.976 [2024-10-08 18:35:58.267678] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.976 [2024-10-08 18:35:58.270278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.976 [2024-10-08 18:35:58.279655] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.976 [2024-10-08 18:35:58.280073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.976 [2024-10-08 18:35:58.280089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.976 [2024-10-08 18:35:58.280096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.976 [2024-10-08 18:35:58.280262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.976 [2024-10-08 18:35:58.280433] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.976 [2024-10-08 18:35:58.280441] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.976 [2024-10-08 18:35:58.280448] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.976 [2024-10-08 18:35:58.283038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:04.976 [2024-10-08 18:35:58.292548] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.976 [2024-10-08 18:35:58.292917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.977 [2024-10-08 18:35:58.292960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:04.977 [2024-10-08 18:35:58.292983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:04.977 [2024-10-08 18:35:58.293573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:04.977 [2024-10-08 18:35:58.293801] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:04.977 [2024-10-08 18:35:58.293810] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:04.977 [2024-10-08 18:35:58.293816] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.296506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.305344] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.305693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.305708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.305716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.305882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.306048] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.306056] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.306062] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.308660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.318124] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.318532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.318551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.318559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.318724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.318890] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.318898] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.318905] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.321537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.330872] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.331265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.331281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.331288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.331460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.331627] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.331635] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.331641] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 7381.75 IOPS, 28.83 MiB/s [2024-10-08T16:35:58.561Z] [2024-10-08 18:35:58.335476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.343667] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.344019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.344035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.344042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.344208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.344382] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.344391] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.344397] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.346989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.356579] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.356919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.356958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.356983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.357573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.358104] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.358113] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.358120] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.360729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.369434] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.369778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.369794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.369801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.369968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.370134] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.370143] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.370149] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.372837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.382406] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.382752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.382767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.382774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.382941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.383107] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.383116] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.383122] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.385756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.395275] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.395617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.395633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.395640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.395806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.395973] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.395981] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.395988] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.398679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.408146] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.408552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.408597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.408621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.409108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.409319] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.409336] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.409350] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.415570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.423201] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.423649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.423670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.423681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.423932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.424187] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.424198] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.424208] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.428250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.436210] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.436641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.436686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.436709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.437238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.437415] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.437424] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.437431] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.440158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.449198] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.449571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.449587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.449597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.449763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.449929] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.449938] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.449944] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.452661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.462108] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.462506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.462522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.462529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.462695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.462863] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.462872] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.462878] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.465538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.474864] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.475285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.475328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.475352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.475943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.476471] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.476479] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.476486] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.479078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.487833] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.488238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.488254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.488262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.488437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.488609] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.488621] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.488628] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.491360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.500752] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.501151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.501167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.501174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.501340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.501512] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.501520] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.501526] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.504121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.513466] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.513877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.513892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.513899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.514065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.514231] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.514239] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.514246] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.516846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.526306] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.526717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.526732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.238 [2024-10-08 18:35:58.526740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.238 [2024-10-08 18:35:58.526906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.238 [2024-10-08 18:35:58.527071] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.238 [2024-10-08 18:35:58.527079] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.238 [2024-10-08 18:35:58.527085] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.238 [2024-10-08 18:35:58.529693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.238 [2024-10-08 18:35:58.539120] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.238 [2024-10-08 18:35:58.539526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.238 [2024-10-08 18:35:58.539543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.239 [2024-10-08 18:35:58.539551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.239 [2024-10-08 18:35:58.539717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.239 [2024-10-08 18:35:58.539884] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.239 [2024-10-08 18:35:58.539892] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.239 [2024-10-08 18:35:58.539899] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.239 [2024-10-08 18:35:58.542510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.239 [2024-10-08 18:35:58.551837] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.239 [2024-10-08 18:35:58.552224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.239 [2024-10-08 18:35:58.552239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.239 [2024-10-08 18:35:58.552246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.239 [2024-10-08 18:35:58.552425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.239 [2024-10-08 18:35:58.552592] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.239 [2024-10-08 18:35:58.552600] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.239 [2024-10-08 18:35:58.552606] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.239 [2024-10-08 18:35:58.555309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.498 [2024-10-08 18:35:58.564747] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.498 [2024-10-08 18:35:58.565165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.498 [2024-10-08 18:35:58.565181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.498 [2024-10-08 18:35:58.565189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.498 [2024-10-08 18:35:58.565356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.498 [2024-10-08 18:35:58.565528] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.498 [2024-10-08 18:35:58.565537] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.498 [2024-10-08 18:35:58.565543] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.498 [2024-10-08 18:35:58.568137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.498 [2024-10-08 18:35:58.577579] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.498 [2024-10-08 18:35:58.577987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.498 [2024-10-08 18:35:58.578003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.498 [2024-10-08 18:35:58.578011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.498 [2024-10-08 18:35:58.578180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.498 [2024-10-08 18:35:58.578347] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.498 [2024-10-08 18:35:58.578355] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.498 [2024-10-08 18:35:58.578361] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.498 [2024-10-08 18:35:58.580959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.498 [2024-10-08 18:35:58.590390] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.498 [2024-10-08 18:35:58.590757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.498 [2024-10-08 18:35:58.590772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.498 [2024-10-08 18:35:58.590779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.498 [2024-10-08 18:35:58.590936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.499 [2024-10-08 18:35:58.591094] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.499 [2024-10-08 18:35:58.591102] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.499 [2024-10-08 18:35:58.591107] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.499 [2024-10-08 18:35:58.593712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.499 [2024-10-08 18:35:58.603231] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.499 [2024-10-08 18:35:58.603647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.499 [2024-10-08 18:35:58.603662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.499 [2024-10-08 18:35:58.603670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.499 [2024-10-08 18:35:58.603836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.499 [2024-10-08 18:35:58.604001] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.499 [2024-10-08 18:35:58.604009] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.499 [2024-10-08 18:35:58.604015] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.499 [2024-10-08 18:35:58.606627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.499 [2024-10-08 18:35:58.616006] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.499 [2024-10-08 18:35:58.616420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.499 [2024-10-08 18:35:58.616436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.499 [2024-10-08 18:35:58.616443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.499 [2024-10-08 18:35:58.616609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.499 [2024-10-08 18:35:58.616775] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.499 [2024-10-08 18:35:58.616783] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.499 [2024-10-08 18:35:58.616792] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.499 [2024-10-08 18:35:58.619419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.499 [2024-10-08 18:35:58.628914] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.499 [2024-10-08 18:35:58.629302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.499 [2024-10-08 18:35:58.629317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.499 [2024-10-08 18:35:58.629324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.499 [2024-10-08 18:35:58.629506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.499 [2024-10-08 18:35:58.629673] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.499 [2024-10-08 18:35:58.629681] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.499 [2024-10-08 18:35:58.629688] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.499 [2024-10-08 18:35:58.632347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.499 [2024-10-08 18:35:58.641840] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.499 [2024-10-08 18:35:58.642269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.499 [2024-10-08 18:35:58.642316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.499 [2024-10-08 18:35:58.642341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.499 [2024-10-08 18:35:58.642814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.499 [2024-10-08 18:35:58.642982] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.499 [2024-10-08 18:35:58.642990] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.499 [2024-10-08 18:35:58.642997] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.499 [2024-10-08 18:35:58.645592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.499 [2024-10-08 18:35:58.654612] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.499 [2024-10-08 18:35:58.654918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.499 [2024-10-08 18:35:58.654934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.499 [2024-10-08 18:35:58.654941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.499 [2024-10-08 18:35:58.655099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.499 [2024-10-08 18:35:58.655257] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.499 [2024-10-08 18:35:58.655265] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.499 [2024-10-08 18:35:58.655271] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.499 [2024-10-08 18:35:58.657880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.499 [2024-10-08 18:35:58.667415] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.499 [2024-10-08 18:35:58.667844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.499 [2024-10-08 18:35:58.667888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.499 [2024-10-08 18:35:58.667911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.499 [2024-10-08 18:35:58.668503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.499 [2024-10-08 18:35:58.668721] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.499 [2024-10-08 18:35:58.668729] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.499 [2024-10-08 18:35:58.668735] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.499 [2024-10-08 18:35:58.671328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.499 [2024-10-08 18:35:58.680245] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.499 [2024-10-08 18:35:58.680658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.499 [2024-10-08 18:35:58.680674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.499 [2024-10-08 18:35:58.680681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.499 [2024-10-08 18:35:58.680847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.499 [2024-10-08 18:35:58.681013] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.499 [2024-10-08 18:35:58.681021] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.499 [2024-10-08 18:35:58.681028] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.499 [2024-10-08 18:35:58.683636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.499 [2024-10-08 18:35:58.693037] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.499 [2024-10-08 18:35:58.693454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.499 [2024-10-08 18:35:58.693499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.499 [2024-10-08 18:35:58.693522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.499 [2024-10-08 18:35:58.693955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.499 [2024-10-08 18:35:58.694122] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.499 [2024-10-08 18:35:58.694130] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.499 [2024-10-08 18:35:58.694136] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.499 [2024-10-08 18:35:58.696741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.499 [2024-10-08 18:35:58.705847] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.499 [2024-10-08 18:35:58.706214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.499 [2024-10-08 18:35:58.706229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.499 [2024-10-08 18:35:58.706236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.499 [2024-10-08 18:35:58.706416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.499 [2024-10-08 18:35:58.706587] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.499 [2024-10-08 18:35:58.706595] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.499 [2024-10-08 18:35:58.706601] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.499 [2024-10-08 18:35:58.709200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.499 [2024-10-08 18:35:58.718660] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.499 [2024-10-08 18:35:58.719037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.499 [2024-10-08 18:35:58.719080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.499 [2024-10-08 18:35:58.719104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.499 [2024-10-08 18:35:58.719693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.499 [2024-10-08 18:35:58.720145] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.499 [2024-10-08 18:35:58.720153] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.499 [2024-10-08 18:35:58.720159] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.499 [2024-10-08 18:35:58.722819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.499 [2024-10-08 18:35:58.731539] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.500 [2024-10-08 18:35:58.731902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.500 [2024-10-08 18:35:58.731919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.500 [2024-10-08 18:35:58.731926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.500 [2024-10-08 18:35:58.732092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.500 [2024-10-08 18:35:58.732260] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.500 [2024-10-08 18:35:58.732269] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.500 [2024-10-08 18:35:58.732275] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.500 [2024-10-08 18:35:58.734998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.500 [2024-10-08 18:35:58.744605] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.500 [2024-10-08 18:35:58.744930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.500 [2024-10-08 18:35:58.744946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.500 [2024-10-08 18:35:58.744954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.500 [2024-10-08 18:35:58.745119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.500 [2024-10-08 18:35:58.745285] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.500 [2024-10-08 18:35:58.745294] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.500 [2024-10-08 18:35:58.745300] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.500 [2024-10-08 18:35:58.748000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.500 [2024-10-08 18:35:58.757405] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.500 [2024-10-08 18:35:58.757848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.500 [2024-10-08 18:35:58.757887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.500 [2024-10-08 18:35:58.757912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.500 [2024-10-08 18:35:58.758499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.500 [2024-10-08 18:35:58.759043] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.500 [2024-10-08 18:35:58.759052] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.500 [2024-10-08 18:35:58.759058] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.500 [2024-10-08 18:35:58.761755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.500 [2024-10-08 18:35:58.770324] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.500 [2024-10-08 18:35:58.770733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.500 [2024-10-08 18:35:58.770777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.500 [2024-10-08 18:35:58.770801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.500 [2024-10-08 18:35:58.771392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.500 [2024-10-08 18:35:58.771975] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.500 [2024-10-08 18:35:58.771983] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.500 [2024-10-08 18:35:58.771989] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.500 [2024-10-08 18:35:58.774655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.500 [2024-10-08 18:35:58.783243] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.500 [2024-10-08 18:35:58.783614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.500 [2024-10-08 18:35:58.783657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.500 [2024-10-08 18:35:58.783680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.500 [2024-10-08 18:35:58.784190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.500 [2024-10-08 18:35:58.784357] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.500 [2024-10-08 18:35:58.784364] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.500 [2024-10-08 18:35:58.784371] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.500 [2024-10-08 18:35:58.787031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.500 [2024-10-08 18:35:58.796060] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.500 [2024-10-08 18:35:58.796421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.500 [2024-10-08 18:35:58.796441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.500 [2024-10-08 18:35:58.796448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.500 [2024-10-08 18:35:58.796614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.500 [2024-10-08 18:35:58.796781] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.500 [2024-10-08 18:35:58.796789] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.500 [2024-10-08 18:35:58.796795] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.500 [2024-10-08 18:35:58.799535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.500 [2024-10-08 18:35:58.808853] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.500 [2024-10-08 18:35:58.809268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.500 [2024-10-08 18:35:58.809310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.500 [2024-10-08 18:35:58.809334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.500 [2024-10-08 18:35:58.809859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.500 [2024-10-08 18:35:58.810026] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.500 [2024-10-08 18:35:58.810034] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.500 [2024-10-08 18:35:58.810040] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.500 [2024-10-08 18:35:58.812635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.760 [2024-10-08 18:35:58.821805] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.760 [2024-10-08 18:35:58.822248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.760 [2024-10-08 18:35:58.822291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.760 [2024-10-08 18:35:58.822314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.760 [2024-10-08 18:35:58.822904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.760 [2024-10-08 18:35:58.823108] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.760 [2024-10-08 18:35:58.823116] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.760 [2024-10-08 18:35:58.823122] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.760 [2024-10-08 18:35:58.825808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.760 [2024-10-08 18:35:58.834516] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.760 [2024-10-08 18:35:58.834942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.760 [2024-10-08 18:35:58.834984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.760 [2024-10-08 18:35:58.835007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.760 [2024-10-08 18:35:58.835602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.760 [2024-10-08 18:35:58.836189] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.760 [2024-10-08 18:35:58.836197] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.760 [2024-10-08 18:35:58.836203] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.760 [2024-10-08 18:35:58.838806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.760 [2024-10-08 18:35:58.847235] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.760 [2024-10-08 18:35:58.847661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.760 [2024-10-08 18:35:58.847677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.760 [2024-10-08 18:35:58.847684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.760 [2024-10-08 18:35:58.847851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.760 [2024-10-08 18:35:58.848017] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.760 [2024-10-08 18:35:58.848025] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.760 [2024-10-08 18:35:58.848032] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.760 [2024-10-08 18:35:58.850643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.760 [2024-10-08 18:35:58.860054] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.760 [2024-10-08 18:35:58.860466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.760 [2024-10-08 18:35:58.860481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.760 [2024-10-08 18:35:58.860488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.760 [2024-10-08 18:35:58.860645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.760 [2024-10-08 18:35:58.860803] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.760 [2024-10-08 18:35:58.860810] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.760 [2024-10-08 18:35:58.860816] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.760 [2024-10-08 18:35:58.863427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.760 [2024-10-08 18:35:58.872802] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.760 [2024-10-08 18:35:58.873242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.760 [2024-10-08 18:35:58.873286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.760 [2024-10-08 18:35:58.873310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.760 [2024-10-08 18:35:58.873900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.760 [2024-10-08 18:35:58.874160] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.760 [2024-10-08 18:35:58.874168] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.760 [2024-10-08 18:35:58.874175] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.760 [2024-10-08 18:35:58.880326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.760 [2024-10-08 18:35:58.887692] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.760 [2024-10-08 18:35:58.888104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.760 [2024-10-08 18:35:58.888125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.760 [2024-10-08 18:35:58.888136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.760 [2024-10-08 18:35:58.888394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.760 [2024-10-08 18:35:58.888650] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.760 [2024-10-08 18:35:58.888661] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.760 [2024-10-08 18:35:58.888671] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.760 [2024-10-08 18:35:58.892709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.760 [2024-10-08 18:35:58.900644] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.760 [2024-10-08 18:35:58.901076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.760 [2024-10-08 18:35:58.901107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.761 [2024-10-08 18:35:58.901130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.761 [2024-10-08 18:35:58.901722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.761 [2024-10-08 18:35:58.902307] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.761 [2024-10-08 18:35:58.902332] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.761 [2024-10-08 18:35:58.902363] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.761 [2024-10-08 18:35:58.905052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.761 [2024-10-08 18:35:58.913452] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.761 [2024-10-08 18:35:58.913867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.761 [2024-10-08 18:35:58.913883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.761 [2024-10-08 18:35:58.913890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.761 [2024-10-08 18:35:58.914047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.761 [2024-10-08 18:35:58.914204] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.761 [2024-10-08 18:35:58.914212] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.761 [2024-10-08 18:35:58.914218] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.761 [2024-10-08 18:35:58.916825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.761 [2024-10-08 18:35:58.926291] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.761 [2024-10-08 18:35:58.926722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.761 [2024-10-08 18:35:58.926738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.761 [2024-10-08 18:35:58.926748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.761 [2024-10-08 18:35:58.926914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.761 [2024-10-08 18:35:58.927080] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.761 [2024-10-08 18:35:58.927089] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.761 [2024-10-08 18:35:58.927095] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.761 [2024-10-08 18:35:58.929700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.761 [2024-10-08 18:35:58.939048] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.761 [2024-10-08 18:35:58.939486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.761 [2024-10-08 18:35:58.939530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.761 [2024-10-08 18:35:58.939554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.761 [2024-10-08 18:35:58.940132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.761 [2024-10-08 18:35:58.940566] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.761 [2024-10-08 18:35:58.940575] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.761 [2024-10-08 18:35:58.940582] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.761 [2024-10-08 18:35:58.943241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.761 [2024-10-08 18:35:58.951950] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.761 [2024-10-08 18:35:58.952380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.761 [2024-10-08 18:35:58.952397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.761 [2024-10-08 18:35:58.952404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.761 [2024-10-08 18:35:58.952570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.761 [2024-10-08 18:35:58.952736] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.761 [2024-10-08 18:35:58.952744] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.761 [2024-10-08 18:35:58.952750] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.761 [2024-10-08 18:35:58.955457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.761 [2024-10-08 18:35:58.964761] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.761 [2024-10-08 18:35:58.965154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.761 [2024-10-08 18:35:58.965168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.761 [2024-10-08 18:35:58.965175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.761 [2024-10-08 18:35:58.965332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.761 [2024-10-08 18:35:58.965516] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.761 [2024-10-08 18:35:58.965527] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.761 [2024-10-08 18:35:58.965534] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.761 [2024-10-08 18:35:58.968186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.761 [2024-10-08 18:35:58.977497] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.761 [2024-10-08 18:35:58.977906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.761 [2024-10-08 18:35:58.977921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.761 [2024-10-08 18:35:58.977928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.761 [2024-10-08 18:35:58.978085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.761 [2024-10-08 18:35:58.978243] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.761 [2024-10-08 18:35:58.978251] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.761 [2024-10-08 18:35:58.978257] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.761 [2024-10-08 18:35:58.980865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.761 [2024-10-08 18:35:58.990244] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.761 [2024-10-08 18:35:58.990676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.761 [2024-10-08 18:35:58.990692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.761 [2024-10-08 18:35:58.990699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.761 [2024-10-08 18:35:58.990865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.761 [2024-10-08 18:35:58.991031] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.761 [2024-10-08 18:35:58.991039] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.761 [2024-10-08 18:35:58.991045] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.761 [2024-10-08 18:35:58.993809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.761 [2024-10-08 18:35:59.003348] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.761 [2024-10-08 18:35:59.003814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.761 [2024-10-08 18:35:59.003858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.761 [2024-10-08 18:35:59.003882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.761 [2024-10-08 18:35:59.004471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.761 [2024-10-08 18:35:59.004639] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.761 [2024-10-08 18:35:59.004647] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.761 [2024-10-08 18:35:59.004653] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.761 [2024-10-08 18:35:59.007307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.761 [2024-10-08 18:35:59.016191] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.761 [2024-10-08 18:35:59.016619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.761 [2024-10-08 18:35:59.016635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.761 [2024-10-08 18:35:59.016642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.761 [2024-10-08 18:35:59.016808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.761 [2024-10-08 18:35:59.016974] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.761 [2024-10-08 18:35:59.016982] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.761 [2024-10-08 18:35:59.016988] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.761 [2024-10-08 18:35:59.019600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.761 [2024-10-08 18:35:59.028939] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.761 [2024-10-08 18:35:59.029352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.761 [2024-10-08 18:35:59.029368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.761 [2024-10-08 18:35:59.029379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.761 [2024-10-08 18:35:59.029561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.761 [2024-10-08 18:35:59.029727] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.761 [2024-10-08 18:35:59.029734] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.761 [2024-10-08 18:35:59.029741] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.761 [2024-10-08 18:35:59.032333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.761 [2024-10-08 18:35:59.041776] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.762 [2024-10-08 18:35:59.042200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.762 [2024-10-08 18:35:59.042243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.762 [2024-10-08 18:35:59.042267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.762 [2024-10-08 18:35:59.042754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.762 [2024-10-08 18:35:59.042922] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.762 [2024-10-08 18:35:59.042930] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.762 [2024-10-08 18:35:59.042936] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.762 [2024-10-08 18:35:59.045531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.762 [2024-10-08 18:35:59.054579] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.762 [2024-10-08 18:35:59.055019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.762 [2024-10-08 18:35:59.055052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.762 [2024-10-08 18:35:59.055077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.762 [2024-10-08 18:35:59.055661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.762 [2024-10-08 18:35:59.055828] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.762 [2024-10-08 18:35:59.055836] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.762 [2024-10-08 18:35:59.055842] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.762 [2024-10-08 18:35:59.062018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:05.762 [2024-10-08 18:35:59.069536] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:05.762 [2024-10-08 18:35:59.070047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.762 [2024-10-08 18:35:59.070068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:05.762 [2024-10-08 18:35:59.070079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:05.762 [2024-10-08 18:35:59.070331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:05.762 [2024-10-08 18:35:59.070590] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:05.762 [2024-10-08 18:35:59.070602] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:05.762 [2024-10-08 18:35:59.070612] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:05.762 [2024-10-08 18:35:59.074654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.022 [2024-10-08 18:35:59.082616] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.022 [2024-10-08 18:35:59.083043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.022 [2024-10-08 18:35:59.083059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.022 [2024-10-08 18:35:59.083067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.022 [2024-10-08 18:35:59.083239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.022 [2024-10-08 18:35:59.083415] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.022 [2024-10-08 18:35:59.083424] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.022 [2024-10-08 18:35:59.083431] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.022 [2024-10-08 18:35:59.086163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.022 [2024-10-08 18:35:59.095430] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.022 [2024-10-08 18:35:59.095733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.022 [2024-10-08 18:35:59.095748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.022 [2024-10-08 18:35:59.095755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.022 [2024-10-08 18:35:59.095912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.022 [2024-10-08 18:35:59.096070] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.022 [2024-10-08 18:35:59.096078] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.022 [2024-10-08 18:35:59.096087] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.022 [2024-10-08 18:35:59.098684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.022 [2024-10-08 18:35:59.108172] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.022 [2024-10-08 18:35:59.108620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.022 [2024-10-08 18:35:59.108664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.022 [2024-10-08 18:35:59.108688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.022 [2024-10-08 18:35:59.109264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.022 [2024-10-08 18:35:59.109795] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.022 [2024-10-08 18:35:59.109804] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.022 [2024-10-08 18:35:59.109810] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.022 [2024-10-08 18:35:59.112402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.022 [2024-10-08 18:35:59.120987] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.022 [2024-10-08 18:35:59.121400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.022 [2024-10-08 18:35:59.121415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.022 [2024-10-08 18:35:59.121422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.022 [2024-10-08 18:35:59.121579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.022 [2024-10-08 18:35:59.121736] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.022 [2024-10-08 18:35:59.121744] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.022 [2024-10-08 18:35:59.121750] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.022 [2024-10-08 18:35:59.124391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.022 [2024-10-08 18:35:59.133757] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.022 [2024-10-08 18:35:59.134191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.022 [2024-10-08 18:35:59.134220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.022 [2024-10-08 18:35:59.134245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.022 [2024-10-08 18:35:59.134845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.022 [2024-10-08 18:35:59.135014] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.022 [2024-10-08 18:35:59.135022] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.022 [2024-10-08 18:35:59.135028] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.022 [2024-10-08 18:35:59.137631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.022 [2024-10-08 18:35:59.146478] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.022 [2024-10-08 18:35:59.146910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.022 [2024-10-08 18:35:59.146952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.022 [2024-10-08 18:35:59.146975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.022 [2024-10-08 18:35:59.147569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.022 [2024-10-08 18:35:59.148040] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.022 [2024-10-08 18:35:59.148048] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.022 [2024-10-08 18:35:59.148054] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.022 [2024-10-08 18:35:59.150688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.022 [2024-10-08 18:35:59.159201] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.022 [2024-10-08 18:35:59.159617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.022 [2024-10-08 18:35:59.159633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.022 [2024-10-08 18:35:59.159640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.022 [2024-10-08 18:35:59.159807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.022 [2024-10-08 18:35:59.159973] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.022 [2024-10-08 18:35:59.159981] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.022 [2024-10-08 18:35:59.159987] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.022 [2024-10-08 18:35:59.162600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.022 [2024-10-08 18:35:59.171939] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.022 [2024-10-08 18:35:59.172352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.022 [2024-10-08 18:35:59.172367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.022 [2024-10-08 18:35:59.172379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.022 [2024-10-08 18:35:59.172560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.022 [2024-10-08 18:35:59.172727] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.022 [2024-10-08 18:35:59.172735] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.022 [2024-10-08 18:35:59.172741] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.022 [2024-10-08 18:35:59.175401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.022 [2024-10-08 18:35:59.184731] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.022 [2024-10-08 18:35:59.185144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.022 [2024-10-08 18:35:59.185186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.022 [2024-10-08 18:35:59.185210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.022 [2024-10-08 18:35:59.185725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.022 [2024-10-08 18:35:59.185896] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.022 [2024-10-08 18:35:59.185904] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.022 [2024-10-08 18:35:59.185910] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.022 [2024-10-08 18:35:59.188527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.022 [2024-10-08 18:35:59.197472] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.023 [2024-10-08 18:35:59.197899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.023 [2024-10-08 18:35:59.197942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.023 [2024-10-08 18:35:59.197966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.023 [2024-10-08 18:35:59.198559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.023 [2024-10-08 18:35:59.199106] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.023 [2024-10-08 18:35:59.199114] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.023 [2024-10-08 18:35:59.199120] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.023 [2024-10-08 18:35:59.201751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.023 [2024-10-08 18:35:59.210182] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.023 [2024-10-08 18:35:59.210595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.023 [2024-10-08 18:35:59.210611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.023 [2024-10-08 18:35:59.210618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.023 [2024-10-08 18:35:59.210775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.023 [2024-10-08 18:35:59.210933] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.023 [2024-10-08 18:35:59.210940] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.023 [2024-10-08 18:35:59.210947] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.023 [2024-10-08 18:35:59.213610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.023 [2024-10-08 18:35:59.222943] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.023 [2024-10-08 18:35:59.223366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.023 [2024-10-08 18:35:59.223422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.023 [2024-10-08 18:35:59.223446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.023 [2024-10-08 18:35:59.223877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.023 [2024-10-08 18:35:59.224044] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.023 [2024-10-08 18:35:59.224052] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.023 [2024-10-08 18:35:59.224058] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.023 [2024-10-08 18:35:59.226709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.023 [2024-10-08 18:35:59.235665] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.023 [2024-10-08 18:35:59.236080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.023 [2024-10-08 18:35:59.236095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.023 [2024-10-08 18:35:59.236101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.023 [2024-10-08 18:35:59.236259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.023 [2024-10-08 18:35:59.236438] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.023 [2024-10-08 18:35:59.236447] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.023 [2024-10-08 18:35:59.236454] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.023 [2024-10-08 18:35:59.239053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.023 [2024-10-08 18:35:59.248416] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.023 [2024-10-08 18:35:59.248757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.023 [2024-10-08 18:35:59.248772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.023 [2024-10-08 18:35:59.248780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.023 [2024-10-08 18:35:59.248946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.023 [2024-10-08 18:35:59.249112] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.023 [2024-10-08 18:35:59.249120] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.023 [2024-10-08 18:35:59.249127] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.023 [2024-10-08 18:35:59.251896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.023 [2024-10-08 18:35:59.261439] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.023 [2024-10-08 18:35:59.261766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.023 [2024-10-08 18:35:59.261783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.023 [2024-10-08 18:35:59.261790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.023 [2024-10-08 18:35:59.261968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.023 [2024-10-08 18:35:59.262135] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.023 [2024-10-08 18:35:59.262144] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.023 [2024-10-08 18:35:59.262150] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.023 [2024-10-08 18:35:59.264794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.023 [2024-10-08 18:35:59.274299] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.023 [2024-10-08 18:35:59.274677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.023 [2024-10-08 18:35:59.274694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.023 [2024-10-08 18:35:59.274704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.023 [2024-10-08 18:35:59.274876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.023 [2024-10-08 18:35:59.275063] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.023 [2024-10-08 18:35:59.275071] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.023 [2024-10-08 18:35:59.275077] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.023 [2024-10-08 18:35:59.277674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.023 [2024-10-08 18:35:59.287144] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.023 [2024-10-08 18:35:59.287510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.023 [2024-10-08 18:35:59.287526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.023 [2024-10-08 18:35:59.287534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.023 [2024-10-08 18:35:59.287713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.023 [2024-10-08 18:35:59.287879] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.023 [2024-10-08 18:35:59.287887] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.023 [2024-10-08 18:35:59.287894] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.023 [2024-10-08 18:35:59.290516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.023 [2024-10-08 18:35:59.299920] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.023 [2024-10-08 18:35:59.300339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.023 [2024-10-08 18:35:59.300394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.023 [2024-10-08 18:35:59.300420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.023 [2024-10-08 18:35:59.300998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.023 [2024-10-08 18:35:59.301423] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.023 [2024-10-08 18:35:59.301432] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.023 [2024-10-08 18:35:59.301439] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.023 [2024-10-08 18:35:59.304034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.023 [2024-10-08 18:35:59.312747] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.023 [2024-10-08 18:35:59.313149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.023 [2024-10-08 18:35:59.313193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.023 [2024-10-08 18:35:59.313216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.023 [2024-10-08 18:35:59.313808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.023 [2024-10-08 18:35:59.314391] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.023 [2024-10-08 18:35:59.314403] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.023 [2024-10-08 18:35:59.314409] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.023 [2024-10-08 18:35:59.317015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.023 [2024-10-08 18:35:59.325706] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.023 [2024-10-08 18:35:59.326093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.023 [2024-10-08 18:35:59.326110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.023 [2024-10-08 18:35:59.326117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.023 [2024-10-08 18:35:59.326284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.023 [2024-10-08 18:35:59.326455] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.023 [2024-10-08 18:35:59.326464] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.024 [2024-10-08 18:35:59.326471] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.024 [2024-10-08 18:35:59.329063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.024 5905.40 IOPS, 23.07 MiB/s [2024-10-08T16:35:59.347Z] [2024-10-08 18:35:59.339028] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.024 [2024-10-08 18:35:59.339388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.024 [2024-10-08 18:35:59.339405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.024 [2024-10-08 18:35:59.339412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.024 [2024-10-08 18:35:59.339583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.024 [2024-10-08 18:35:59.339754] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.024 [2024-10-08 18:35:59.339762] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.024 [2024-10-08 18:35:59.339768] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.024 [2024-10-08 18:35:59.342497] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.283 [2024-10-08 18:35:59.351869] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.283 [2024-10-08 18:35:59.352306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.283 [2024-10-08 18:35:59.352322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.283 [2024-10-08 18:35:59.352329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.283 [2024-10-08 18:35:59.352500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.283 [2024-10-08 18:35:59.352667] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.283 [2024-10-08 18:35:59.352675] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.283 [2024-10-08 18:35:59.352682] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.283 [2024-10-08 18:35:59.355337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.283 [2024-10-08 18:35:59.364619] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.283 [2024-10-08 18:35:59.365041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.283 [2024-10-08 18:35:59.365084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.283 [2024-10-08 18:35:59.365107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.284 [2024-10-08 18:35:59.365545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.284 [2024-10-08 18:35:59.365714] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.284 [2024-10-08 18:35:59.365722] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.284 [2024-10-08 18:35:59.365728] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.284 [2024-10-08 18:35:59.368402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.284 [2024-10-08 18:35:59.377444] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.284 [2024-10-08 18:35:59.377863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.284 [2024-10-08 18:35:59.377878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.284 [2024-10-08 18:35:59.377885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.284 [2024-10-08 18:35:59.378042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.284 [2024-10-08 18:35:59.378200] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.284 [2024-10-08 18:35:59.378207] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.284 [2024-10-08 18:35:59.378213] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.284 [2024-10-08 18:35:59.380821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.284 [2024-10-08 18:35:59.390196] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.284 [2024-10-08 18:35:59.390540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.284 [2024-10-08 18:35:59.390583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.284 [2024-10-08 18:35:59.390606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.284 [2024-10-08 18:35:59.391184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.284 [2024-10-08 18:35:59.391626] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.284 [2024-10-08 18:35:59.391635] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.284 [2024-10-08 18:35:59.391642] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.284 [2024-10-08 18:35:59.394236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.284 [2024-10-08 18:35:59.402938] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.284 [2024-10-08 18:35:59.403303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.284 [2024-10-08 18:35:59.403318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.284 [2024-10-08 18:35:59.403330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.284 [2024-10-08 18:35:59.403499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.284 [2024-10-08 18:35:59.403666] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.284 [2024-10-08 18:35:59.403674] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.284 [2024-10-08 18:35:59.403680] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.284 [2024-10-08 18:35:59.406274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.284 [2024-10-08 18:35:59.415752] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.284 [2024-10-08 18:35:59.416100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.284 [2024-10-08 18:35:59.416116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.284 [2024-10-08 18:35:59.416123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.284 [2024-10-08 18:35:59.416289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.284 [2024-10-08 18:35:59.416461] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.284 [2024-10-08 18:35:59.416470] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.284 [2024-10-08 18:35:59.416477] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.284 [2024-10-08 18:35:59.419114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.284 [2024-10-08 18:35:59.428681] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.284 [2024-10-08 18:35:59.429017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.284 [2024-10-08 18:35:59.429033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.284 [2024-10-08 18:35:59.429040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.284 [2024-10-08 18:35:59.429207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.284 [2024-10-08 18:35:59.429373] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.284 [2024-10-08 18:35:59.429386] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.284 [2024-10-08 18:35:59.429395] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.284 [2024-10-08 18:35:59.432037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.284 [2024-10-08 18:35:59.441626] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.284 [2024-10-08 18:35:59.441897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.284 [2024-10-08 18:35:59.441912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.284 [2024-10-08 18:35:59.441920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.284 [2024-10-08 18:35:59.442086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.284 [2024-10-08 18:35:59.442252] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.284 [2024-10-08 18:35:59.442263] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.284 [2024-10-08 18:35:59.442269] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.284 [2024-10-08 18:35:59.444910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.284 [2024-10-08 18:35:59.454714] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.284 [2024-10-08 18:35:59.455049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.284 [2024-10-08 18:35:59.455068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.284 [2024-10-08 18:35:59.455076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.284 [2024-10-08 18:35:59.455247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.284 [2024-10-08 18:35:59.455423] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.284 [2024-10-08 18:35:59.455432] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.284 [2024-10-08 18:35:59.455439] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.284 [2024-10-08 18:35:59.458167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.284 [2024-10-08 18:35:59.467673] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:06.284 [2024-10-08 18:35:59.468081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.284 [2024-10-08 18:35:59.468096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:06.284 [2024-10-08 18:35:59.468103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:06.284 [2024-10-08 18:35:59.468270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:06.284 [2024-10-08 18:35:59.468440] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:06.284 [2024-10-08 18:35:59.468449] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:06.284 [2024-10-08 18:35:59.468455] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:06.284 [2024-10-08 18:35:59.471094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:06.284 [2024-10-08 18:35:59.480459] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.285 [2024-10-08 18:35:59.480830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.285 [2024-10-08 18:35:59.480873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.285 [2024-10-08 18:35:59.480897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.285 [2024-10-08 18:35:59.481427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.285 [2024-10-08 18:35:59.481595] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.285 [2024-10-08 18:35:59.481604] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.285 [2024-10-08 18:35:59.481610] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.285 [2024-10-08 18:35:59.484236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.285 [2024-10-08 18:35:59.493344] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.285 [2024-10-08 18:35:59.493785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.285 [2024-10-08 18:35:59.493828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.285 [2024-10-08 18:35:59.493851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.285 [2024-10-08 18:35:59.494443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.285 [2024-10-08 18:35:59.494869] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.285 [2024-10-08 18:35:59.494877] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.285 [2024-10-08 18:35:59.494883] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.285 [2024-10-08 18:35:59.497557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.285 [2024-10-08 18:35:59.506142] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.285 [2024-10-08 18:35:59.506485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.285 [2024-10-08 18:35:59.506503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.285 [2024-10-08 18:35:59.506510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.285 [2024-10-08 18:35:59.506682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.285 [2024-10-08 18:35:59.506853] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.285 [2024-10-08 18:35:59.506862] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.285 [2024-10-08 18:35:59.506868] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.285 [2024-10-08 18:35:59.509649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.285 [2024-10-08 18:35:59.519218] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.285 [2024-10-08 18:35:59.519702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.285 [2024-10-08 18:35:59.519718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.285 [2024-10-08 18:35:59.519726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.285 [2024-10-08 18:35:59.519891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.285 [2024-10-08 18:35:59.520059] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.285 [2024-10-08 18:35:59.520067] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.285 [2024-10-08 18:35:59.520073] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.285 [2024-10-08 18:35:59.522738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.285 [2024-10-08 18:35:59.531962] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.285 [2024-10-08 18:35:59.532405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.285 [2024-10-08 18:35:59.532449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.285 [2024-10-08 18:35:59.532474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.285 [2024-10-08 18:35:59.532902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.285 [2024-10-08 18:35:59.533070] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.285 [2024-10-08 18:35:59.533078] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.285 [2024-10-08 18:35:59.533084] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.285 [2024-10-08 18:35:59.535681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.285 [2024-10-08 18:35:59.544844] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.285 [2024-10-08 18:35:59.545211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.285 [2024-10-08 18:35:59.545227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.285 [2024-10-08 18:35:59.545234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.285 [2024-10-08 18:35:59.545411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.285 [2024-10-08 18:35:59.545583] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.285 [2024-10-08 18:35:59.545591] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.285 [2024-10-08 18:35:59.545597] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.285 [2024-10-08 18:35:59.548282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.285 [2024-10-08 18:35:59.557625] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.285 [2024-10-08 18:35:59.557981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.285 [2024-10-08 18:35:59.558024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.285 [2024-10-08 18:35:59.558047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.285 [2024-10-08 18:35:59.558638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.285 [2024-10-08 18:35:59.559112] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.285 [2024-10-08 18:35:59.559120] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.285 [2024-10-08 18:35:59.559126] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.285 [2024-10-08 18:35:59.561721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.285 [2024-10-08 18:35:59.570397] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.285 [2024-10-08 18:35:59.570827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.285 [2024-10-08 18:35:59.570870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.285 [2024-10-08 18:35:59.570893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.285 [2024-10-08 18:35:59.571483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.285 [2024-10-08 18:35:59.572028] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.285 [2024-10-08 18:35:59.572037] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.285 [2024-10-08 18:35:59.572047] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.285 [2024-10-08 18:35:59.574677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.285 [2024-10-08 18:35:59.583247] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.285 [2024-10-08 18:35:59.583618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.285 [2024-10-08 18:35:59.583634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.285 [2024-10-08 18:35:59.583641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.285 [2024-10-08 18:35:59.583807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.285 [2024-10-08 18:35:59.583973] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.285 [2024-10-08 18:35:59.583981] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.285 [2024-10-08 18:35:59.583987] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.285 [2024-10-08 18:35:59.586586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.285 [2024-10-08 18:35:59.596108] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.285 [2024-10-08 18:35:59.596569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.286 [2024-10-08 18:35:59.596585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.286 [2024-10-08 18:35:59.596593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.286 [2024-10-08 18:35:59.596759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.286 [2024-10-08 18:35:59.596925] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.286 [2024-10-08 18:35:59.596933] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.286 [2024-10-08 18:35:59.596940] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.286 [2024-10-08 18:35:59.599559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.546 [2024-10-08 18:35:59.608962] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.546 [2024-10-08 18:35:59.609238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.546 [2024-10-08 18:35:59.609254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.546 [2024-10-08 18:35:59.609261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.546 [2024-10-08 18:35:59.609449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.546 [2024-10-08 18:35:59.609621] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.546 [2024-10-08 18:35:59.609630] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.546 [2024-10-08 18:35:59.609636] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.546 [2024-10-08 18:35:59.612303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.546 [2024-10-08 18:35:59.621718] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.546 [2024-10-08 18:35:59.622000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.546 [2024-10-08 18:35:59.622015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.546 [2024-10-08 18:35:59.622022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.546 [2024-10-08 18:35:59.622188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.546 [2024-10-08 18:35:59.622354] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.546 [2024-10-08 18:35:59.622362] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.546 [2024-10-08 18:35:59.622368] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.546 [2024-10-08 18:35:59.625045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.546 [2024-10-08 18:35:59.634505] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.546 [2024-10-08 18:35:59.634796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.546 [2024-10-08 18:35:59.634811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.546 [2024-10-08 18:35:59.634818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.546 [2024-10-08 18:35:59.634975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.546 [2024-10-08 18:35:59.635133] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.546 [2024-10-08 18:35:59.635141] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.546 [2024-10-08 18:35:59.635147] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.546 [2024-10-08 18:35:59.637904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.546 [2024-10-08 18:35:59.647246] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.546 [2024-10-08 18:35:59.647590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.546 [2024-10-08 18:35:59.647607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.546 [2024-10-08 18:35:59.647614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.546 [2024-10-08 18:35:59.647781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.546 [2024-10-08 18:35:59.647947] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.546 [2024-10-08 18:35:59.647955] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.546 [2024-10-08 18:35:59.647962] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.546 [2024-10-08 18:35:59.650560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.546 [2024-10-08 18:35:59.660033] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.546 [2024-10-08 18:35:59.660427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.546 [2024-10-08 18:35:59.660443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.546 [2024-10-08 18:35:59.660451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.546 [2024-10-08 18:35:59.660617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.546 [2024-10-08 18:35:59.660787] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.546 [2024-10-08 18:35:59.660795] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.546 [2024-10-08 18:35:59.660801] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.546 [2024-10-08 18:35:59.663470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.546 [2024-10-08 18:35:59.672772] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.546 [2024-10-08 18:35:59.673033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.546 [2024-10-08 18:35:59.673048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.546 [2024-10-08 18:35:59.673056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.546 [2024-10-08 18:35:59.673221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.546 [2024-10-08 18:35:59.673393] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.546 [2024-10-08 18:35:59.673402] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.546 [2024-10-08 18:35:59.673408] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.546 [2024-10-08 18:35:59.676076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.546 [2024-10-08 18:35:59.685539] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.546 [2024-10-08 18:35:59.685877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.546 [2024-10-08 18:35:59.685893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.546 [2024-10-08 18:35:59.685900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.546 [2024-10-08 18:35:59.686066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.546 [2024-10-08 18:35:59.686231] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.546 [2024-10-08 18:35:59.686240] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.546 [2024-10-08 18:35:59.686246] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.546 [2024-10-08 18:35:59.688904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.546 [2024-10-08 18:35:59.698331] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.546 [2024-10-08 18:35:59.698673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.546 [2024-10-08 18:35:59.698688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.546 [2024-10-08 18:35:59.698695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.546 [2024-10-08 18:35:59.698862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.546 [2024-10-08 18:35:59.699027] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.546 [2024-10-08 18:35:59.699036] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.546 [2024-10-08 18:35:59.699042] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.546 [2024-10-08 18:35:59.701644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.546 [2024-10-08 18:35:59.711116] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.546 [2024-10-08 18:35:59.711399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.546 [2024-10-08 18:35:59.711415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.546 [2024-10-08 18:35:59.711423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.546 [2024-10-08 18:35:59.711589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.546 [2024-10-08 18:35:59.711755] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.546 [2024-10-08 18:35:59.711764] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.546 [2024-10-08 18:35:59.711770] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.546 [2024-10-08 18:35:59.714364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.546 [2024-10-08 18:35:59.723915] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.546 [2024-10-08 18:35:59.724178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.546 [2024-10-08 18:35:59.724193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.546 [2024-10-08 18:35:59.724200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.546 [2024-10-08 18:35:59.724357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.547 [2024-10-08 18:35:59.724521] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.547 [2024-10-08 18:35:59.724529] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.547 [2024-10-08 18:35:59.724535] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.547 [2024-10-08 18:35:59.727195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.547 [2024-10-08 18:35:59.736866] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.547 [2024-10-08 18:35:59.737203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.547 [2024-10-08 18:35:59.737218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.547 [2024-10-08 18:35:59.737225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.547 [2024-10-08 18:35:59.737396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.547 [2024-10-08 18:35:59.737564] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.547 [2024-10-08 18:35:59.737573] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.547 [2024-10-08 18:35:59.737579] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.547 [2024-10-08 18:35:59.740256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.547 [2024-10-08 18:35:59.749665] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.547 [2024-10-08 18:35:59.749959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.547 [2024-10-08 18:35:59.749975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.547 [2024-10-08 18:35:59.749986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.547 [2024-10-08 18:35:59.750159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.547 [2024-10-08 18:35:59.750331] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.547 [2024-10-08 18:35:59.750340] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.547 [2024-10-08 18:35:59.750347] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.547 [2024-10-08 18:35:59.753050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.547 [2024-10-08 18:35:59.762666] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.547 [2024-10-08 18:35:59.763028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.547 [2024-10-08 18:35:59.763043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.547 [2024-10-08 18:35:59.763050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.547 [2024-10-08 18:35:59.763221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.547 [2024-10-08 18:35:59.763397] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.547 [2024-10-08 18:35:59.763406] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.547 [2024-10-08 18:35:59.763413] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.547 [2024-10-08 18:35:59.766145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.547 [2024-10-08 18:35:59.775680] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.547 [2024-10-08 18:35:59.775970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.547 [2024-10-08 18:35:59.775986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.547 [2024-10-08 18:35:59.775993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.547 [2024-10-08 18:35:59.776164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.547 [2024-10-08 18:35:59.776334] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.547 [2024-10-08 18:35:59.776343] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.547 [2024-10-08 18:35:59.776349] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.547 [2024-10-08 18:35:59.779056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.547 [2024-10-08 18:35:59.788569] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.547 [2024-10-08 18:35:59.788918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.547 [2024-10-08 18:35:59.788934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.547 [2024-10-08 18:35:59.788941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.547 [2024-10-08 18:35:59.789107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.547 [2024-10-08 18:35:59.789276] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.547 [2024-10-08 18:35:59.789284] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.547 [2024-10-08 18:35:59.789290] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.547 [2024-10-08 18:35:59.791920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.547 [2024-10-08 18:35:59.801466] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.547 [2024-10-08 18:35:59.801736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.547 [2024-10-08 18:35:59.801752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.547 [2024-10-08 18:35:59.801759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.547 [2024-10-08 18:35:59.801926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.547 [2024-10-08 18:35:59.802092] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.547 [2024-10-08 18:35:59.802101] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.547 [2024-10-08 18:35:59.802107] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.547 [2024-10-08 18:35:59.804716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.547 [2024-10-08 18:35:59.814193] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.547 [2024-10-08 18:35:59.814487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.547 [2024-10-08 18:35:59.814503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.547 [2024-10-08 18:35:59.814510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.547 [2024-10-08 18:35:59.814677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.547 [2024-10-08 18:35:59.814845] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.547 [2024-10-08 18:35:59.814854] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.547 [2024-10-08 18:35:59.814861] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.547 [2024-10-08 18:35:59.817511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.547 [2024-10-08 18:35:59.827035] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.547 [2024-10-08 18:35:59.827371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.547 [2024-10-08 18:35:59.827392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.547 [2024-10-08 18:35:59.827399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.547 [2024-10-08 18:35:59.827565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.547 [2024-10-08 18:35:59.827731] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.547 [2024-10-08 18:35:59.827739] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.547 [2024-10-08 18:35:59.827745] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.547 [2024-10-08 18:35:59.830339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.547 [2024-10-08 18:35:59.839978] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.547 [2024-10-08 18:35:59.840387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.547 [2024-10-08 18:35:59.840402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.547 [2024-10-08 18:35:59.840409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.547 [2024-10-08 18:35:59.840575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.547 [2024-10-08 18:35:59.840741] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.547 [2024-10-08 18:35:59.840749] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.547 [2024-10-08 18:35:59.840756] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.547 [2024-10-08 18:35:59.843384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.547 [2024-10-08 18:35:59.852758] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.547 [2024-10-08 18:35:59.853167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.547 [2024-10-08 18:35:59.853182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.547 [2024-10-08 18:35:59.853190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.547 [2024-10-08 18:35:59.853356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.547 [2024-10-08 18:35:59.853529] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.547 [2024-10-08 18:35:59.853537] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.547 [2024-10-08 18:35:59.853543] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.547 [2024-10-08 18:35:59.856136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.547 [2024-10-08 18:35:59.865700] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.547 [2024-10-08 18:35:59.866133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.548 [2024-10-08 18:35:59.866176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.548 [2024-10-08 18:35:59.866200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.808 [2024-10-08 18:35:59.866669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.808 [2024-10-08 18:35:59.866843] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.808 [2024-10-08 18:35:59.866852] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.808 [2024-10-08 18:35:59.866858] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.808 [2024-10-08 18:35:59.869519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.808 [2024-10-08 18:35:59.878618] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.808 [2024-10-08 18:35:59.879031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.808 [2024-10-08 18:35:59.879047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.808 [2024-10-08 18:35:59.879058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.808 [2024-10-08 18:35:59.879229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.808 [2024-10-08 18:35:59.879409] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.808 [2024-10-08 18:35:59.879419] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.808 [2024-10-08 18:35:59.879425] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.808 [2024-10-08 18:35:59.882034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.808 [2024-10-08 18:35:59.891425] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.808 [2024-10-08 18:35:59.891845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.808 [2024-10-08 18:35:59.891889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.808 [2024-10-08 18:35:59.891912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.808 [2024-10-08 18:35:59.892502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.808 [2024-10-08 18:35:59.892930] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.808 [2024-10-08 18:35:59.892938] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.808 [2024-10-08 18:35:59.892945] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.808 [2024-10-08 18:35:59.895581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.808 [2024-10-08 18:35:59.904206] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.808 [2024-10-08 18:35:59.904623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.808 [2024-10-08 18:35:59.904639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.808 [2024-10-08 18:35:59.904647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.808 [2024-10-08 18:35:59.904813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.808 [2024-10-08 18:35:59.904981] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.808 [2024-10-08 18:35:59.904990] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.808 [2024-10-08 18:35:59.904996] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.808 [2024-10-08 18:35:59.907612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.808 [2024-10-08 18:35:59.917075] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.808 [2024-10-08 18:35:59.917497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.808 [2024-10-08 18:35:59.917543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.808 [2024-10-08 18:35:59.917566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.808 [2024-10-08 18:35:59.918145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.808 [2024-10-08 18:35:59.918737] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.808 [2024-10-08 18:35:59.918781] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.808 [2024-10-08 18:35:59.918788] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.808 [2024-10-08 18:35:59.921384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.808 [2024-10-08 18:35:59.929912] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.808 [2024-10-08 18:35:59.930264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.808 [2024-10-08 18:35:59.930307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.808 [2024-10-08 18:35:59.930331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.808 [2024-10-08 18:35:59.930816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.808 [2024-10-08 18:35:59.930983] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.808 [2024-10-08 18:35:59.930991] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.808 [2024-10-08 18:35:59.930997] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.808 [2024-10-08 18:35:59.933590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
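The nine entries above form one reset cycle, and the same cycle repeats roughly every 13 ms for as long as the target stays down. A schematic sketch of that observable sequence follows; it is NOT SPDK's implementation, and all names in it (ctrlr_state, try_reset, qpair_connect) are hypothetical:

/*
 * Schematic sketch of the retry cycle visible in the log above, not
 * SPDK internals: disconnect -> connect attempt -> failure -> mark
 * controller failed -> report "Resetting controller failed." -> retry.
 */
#include <stdio.h>

enum ctrlr_state { RESETTING, CONNECTING, FAILED };

/* Stand-in for the TCP connect that keeps failing with errno 111. */
static int qpair_connect(void) { return -1; }

static enum ctrlr_state try_reset(void)
{
    enum ctrlr_state s = RESETTING;   /* "resetting controller" (NOTICE) */
    s = CONNECTING;
    if (qpair_connect() != 0)
        s = FAILED;                   /* "Ctrlr is in error state" ->
                                         "controller reinitialization failed" ->
                                         "in failed state." */
    return s;
}

int main(void)
{
    /* Each iteration mirrors one ~13 ms cycle from the log; the bdev
       layer then logs "Resetting controller failed." and reschedules. */
    for (int attempt = 0; attempt < 3; attempt++)
        if (try_reset() == FAILED)
            printf("attempt %d: reset failed, retrying\n", attempt);
    return 0;
}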
00:28:06.808 [2024-10-08 18:35:59.942710] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.808 [2024-10-08 18:35:59.943123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.808 [2024-10-08 18:35:59.943139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.809 [2024-10-08 18:35:59.943146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.809 [2024-10-08 18:35:59.943312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.809 [2024-10-08 18:35:59.943485] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.809 [2024-10-08 18:35:59.943494] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.809 [2024-10-08 18:35:59.943500] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.809 [2024-10-08 18:35:59.946095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 576576 Killed "${NVMF_APP[@]}" "$@" 00:28:06.809 18:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:06.809 18:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:06.809 18:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:06.809 18:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:06.809 18:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:06.809 [2024-10-08 18:35:59.955754] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.809 [2024-10-08 18:35:59.956159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.809 [2024-10-08 18:35:59.956174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.809 [2024-10-08 18:35:59.956182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.809 [2024-10-08 18:35:59.956352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.809 [2024-10-08 18:35:59.956532] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.809 [2024-10-08 18:35:59.956541] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.809 [2024-10-08 18:35:59.956548] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:06.809 18:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=577989 00:28:06.809 18:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 577989 00:28:06.809 18:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:06.809 18:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 577989 ']' 00:28:06.809 18:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.809 18:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.809 18:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.809 18:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.809 18:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:06.809 [2024-10-08 18:35:59.959276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.809 [2024-10-08 18:35:59.968823] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.809 [2024-10-08 18:35:59.969255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.809 [2024-10-08 18:35:59.969270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.809 [2024-10-08 18:35:59.969277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.809 [2024-10-08 18:35:59.969454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.809 [2024-10-08 18:35:59.969626] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.809 [2024-10-08 18:35:59.969636] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.809 [2024-10-08 18:35:59.969643] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.809 [2024-10-08 18:35:59.972374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
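The trace above shows tgt_init relaunching nvmf_tgt (pid 577989) and waitforlisten polling until the new process accepts RPC connections on /var/tmp/spdk.sock; rpc_addr and max_retries=100 come straight from the trace. A conceptual C illustration of that readiness check, assuming a 100 ms poll interval (the real helper is a bash function in autotest_common.sh and its interval may differ):

/*
 * Conceptual sketch of what "waitforlisten 577989" waits for: the
 * relaunched nvmf_tgt accepting RPC connections on /var/tmp/spdk.sock.
 * Illustration only, not the actual helper.
 */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int rpc_sock_ready(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { 0 };
    int ok;

    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    /* max_retries=100 matches the traced helper; the 100 ms sleep
       between attempts is an assumption for this sketch. */
    for (int i = 0; i < 100; i++) {
        if (rpc_sock_ready("/var/tmp/spdk.sock")) {
            puts("target is up");
            return 0;
        }
        usleep(100 * 1000);
    }
    fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
    return 1;
}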
00:28:06.809 [2024-10-08 18:35:59.981908] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.809 [2024-10-08 18:35:59.982312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.809 [2024-10-08 18:35:59.982329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.809 [2024-10-08 18:35:59.982336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.809 [2024-10-08 18:35:59.982514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.809 [2024-10-08 18:35:59.982685] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.809 [2024-10-08 18:35:59.982693] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.809 [2024-10-08 18:35:59.982700] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.809 [2024-10-08 18:35:59.985439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.809 [2024-10-08 18:35:59.994915] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.809 [2024-10-08 18:35:59.995282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.809 [2024-10-08 18:35:59.995299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.809 [2024-10-08 18:35:59.995306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.809 [2024-10-08 18:35:59.995483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.809 [2024-10-08 18:35:59.995664] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.809 [2024-10-08 18:35:59.995672] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.809 [2024-10-08 18:35:59.995678] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.809 [2024-10-08 18:35:59.998334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.809 [2024-10-08 18:36:00.006474] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:28:06.809 [2024-10-08 18:36:00.006514] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.809 [2024-10-08 18:36:00.007874] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.809 [2024-10-08 18:36:00.008300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.809 [2024-10-08 18:36:00.008317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.809 [2024-10-08 18:36:00.008325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.809 [2024-10-08 18:36:00.008504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.809 [2024-10-08 18:36:00.008678] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.809 [2024-10-08 18:36:00.008687] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.809 [2024-10-08 18:36:00.008694] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.809 [2024-10-08 18:36:00.011437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.809 [2024-10-08 18:36:00.021078] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.809 [2024-10-08 18:36:00.021456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.809 [2024-10-08 18:36:00.021475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.809 [2024-10-08 18:36:00.021484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.809 [2024-10-08 18:36:00.021701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.809 [2024-10-08 18:36:00.021910] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.809 [2024-10-08 18:36:00.021921] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.809 [2024-10-08 18:36:00.021929] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.809 [2024-10-08 18:36:00.025433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
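For the relaunch above, nvmfappstart passes -m 0xE, which reaches DPDK as core mask -c 0xE: binary 1110, i.e. cores 1, 2 and 3, leaving core 0 free. A standalone snippet decoding such a mask:

/* Quick check of what the -m 0xE / -c 0xE core mask selects: bits
   1-3, i.e. cores 1, 2 and 3 (core 0 is left out). Illustration only. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xE;   /* from "-m 0xE" in the log */

    for (int core = 0; core < 8 * (int)sizeof(mask); core++) {
        if (mask & (1UL << core))
            printf("core %d enabled\n", core);
    }
    return 0;   /* prints cores 1, 2, 3 */
}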
00:28:06.809 [2024-10-08 18:36:00.034184] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.809 [2024-10-08 18:36:00.034619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.809 [2024-10-08 18:36:00.034637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.809 [2024-10-08 18:36:00.034646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.809 [2024-10-08 18:36:00.034819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.809 [2024-10-08 18:36:00.034992] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.809 [2024-10-08 18:36:00.035001] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.809 [2024-10-08 18:36:00.035010] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.809 [2024-10-08 18:36:00.037749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.809 [2024-10-08 18:36:00.047138] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.809 [2024-10-08 18:36:00.047477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.809 [2024-10-08 18:36:00.047494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.809 [2024-10-08 18:36:00.047502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.809 [2024-10-08 18:36:00.047673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.809 [2024-10-08 18:36:00.047845] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.809 [2024-10-08 18:36:00.047854] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.809 [2024-10-08 18:36:00.047860] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.809 [2024-10-08 18:36:00.050598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.809 [2024-10-08 18:36:00.060354] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.809 [2024-10-08 18:36:00.060786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.809 [2024-10-08 18:36:00.060802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.810 [2024-10-08 18:36:00.060810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.810 [2024-10-08 18:36:00.060982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.810 [2024-10-08 18:36:00.061153] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.810 [2024-10-08 18:36:00.061162] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.810 [2024-10-08 18:36:00.061169] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.810 [2024-10-08 18:36:00.063904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.810 [2024-10-08 18:36:00.073426] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.810 [2024-10-08 18:36:00.073787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.810 [2024-10-08 18:36:00.073804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.810 [2024-10-08 18:36:00.073812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.810 [2024-10-08 18:36:00.073987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.810 [2024-10-08 18:36:00.074158] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.810 [2024-10-08 18:36:00.074167] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.810 [2024-10-08 18:36:00.074174] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.810 [2024-10-08 18:36:00.076916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.810 [2024-10-08 18:36:00.083944] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:06.810 [2024-10-08 18:36:00.086457] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.810 [2024-10-08 18:36:00.086886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.810 [2024-10-08 18:36:00.086903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.810 [2024-10-08 18:36:00.086910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.810 [2024-10-08 18:36:00.087083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.810 [2024-10-08 18:36:00.087255] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.810 [2024-10-08 18:36:00.087264] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.810 [2024-10-08 18:36:00.087270] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.810 [2024-10-08 18:36:00.090021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.810 [2024-10-08 18:36:00.099555] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.810 [2024-10-08 18:36:00.099968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.810 [2024-10-08 18:36:00.099985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.810 [2024-10-08 18:36:00.099993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.810 [2024-10-08 18:36:00.100165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.810 [2024-10-08 18:36:00.100337] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.810 [2024-10-08 18:36:00.100346] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.810 [2024-10-08 18:36:00.100353] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.810 [2024-10-08 18:36:00.103091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
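"Total cores available: 3" is simply the population count of the logged core mask: 0xE is binary 1110, selecting cores 1, 2 and 3 — the same three cores the reactor_run notices report below. A quick check:

    /* Confirms the logged "-c 0xE" mask yields the three cores the
     * app reports: 0xE = 1110b, i.e. cores 1, 2, 3. */
    #include <stdio.h>

    int main(void)
    {
        unsigned mask = 0xE;
        printf("cores in mask 0x%X: %d\n", mask,
               __builtin_popcount(mask));          /* GCC/Clang builtin; prints 3 */
        for (int core = 0; core < 32; core++)
            if (mask & (1u << core))
                printf("core %d selected\n", core); /* cores 1, 2, 3 */
        return 0;
    }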
00:28:06.810 [2024-10-08 18:36:00.112637] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.810 [2024-10-08 18:36:00.113062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.810 [2024-10-08 18:36:00.113078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.810 [2024-10-08 18:36:00.113087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.810 [2024-10-08 18:36:00.113255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.810 [2024-10-08 18:36:00.113443] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.810 [2024-10-08 18:36:00.113452] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.810 [2024-10-08 18:36:00.113464] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.810 [2024-10-08 18:36:00.116191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.810 [2024-10-08 18:36:00.125755] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.810 [2024-10-08 18:36:00.126105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.810 [2024-10-08 18:36:00.126123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:06.810 [2024-10-08 18:36:00.126132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:06.810 [2024-10-08 18:36:00.126305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:06.810 [2024-10-08 18:36:00.126483] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.810 [2024-10-08 18:36:00.126493] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.810 [2024-10-08 18:36:00.126501] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.069 [2024-10-08 18:36:00.129235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.069 [2024-10-08 18:36:00.138769] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.069 [2024-10-08 18:36:00.139147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-10-08 18:36:00.139163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.069 [2024-10-08 18:36:00.139171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.070 [2024-10-08 18:36:00.139343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.070 [2024-10-08 18:36:00.139519] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.070 [2024-10-08 18:36:00.139528] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.070 [2024-10-08 18:36:00.139535] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.070 [2024-10-08 18:36:00.142277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.070 [2024-10-08 18:36:00.151814] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.070 [2024-10-08 18:36:00.152240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.070 [2024-10-08 18:36:00.152262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.070 [2024-10-08 18:36:00.152270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.070 [2024-10-08 18:36:00.152446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.070 [2024-10-08 18:36:00.152618] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.070 [2024-10-08 18:36:00.152627] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.070 [2024-10-08 18:36:00.152634] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.070 [2024-10-08 18:36:00.155348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.070 [2024-10-08 18:36:00.160180] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.070 [2024-10-08 18:36:00.160210] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.070 [2024-10-08 18:36:00.160217] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:07.070 [2024-10-08 18:36:00.160223] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:07.070 [2024-10-08 18:36:00.160228] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
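The app_setup_trace notices above are one message wrapped across several log lines: with tracepoint group mask 0xFFFF the target keeps its trace buffer in /dev/shm/nvmf_trace.0, and the log itself names the capture options — 'spdk_trace -s nvmf -i 0' for a runtime snapshot, or copying the shm file for offline analysis.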
00:28:07.070 [2024-10-08 18:36:00.161191] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:07.070 [2024-10-08 18:36:00.161303] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.070 [2024-10-08 18:36:00.161305] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:07.070 [2024-10-08 18:36:00.164901] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.070 [2024-10-08 18:36:00.165331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.070 [2024-10-08 18:36:00.165349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.070 [2024-10-08 18:36:00.165357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.070 [2024-10-08 18:36:00.165535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.070 [2024-10-08 18:36:00.165708] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.070 [2024-10-08 18:36:00.165717] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.070 [2024-10-08 18:36:00.165724] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.070 [2024-10-08 18:36:00.168460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.070 [2024-10-08 18:36:00.177976] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.070 [2024-10-08 18:36:00.178422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.070 [2024-10-08 18:36:00.178442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.070 [2024-10-08 18:36:00.178450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.070 [2024-10-08 18:36:00.178623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.070 [2024-10-08 18:36:00.178795] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.070 [2024-10-08 18:36:00.178803] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.070 [2024-10-08 18:36:00.178810] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.070 [2024-10-08 18:36:00.181547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.070 [2024-10-08 18:36:00.190933] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.070 [2024-10-08 18:36:00.191365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.070 [2024-10-08 18:36:00.191388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.070 [2024-10-08 18:36:00.191397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.070 [2024-10-08 18:36:00.191570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.070 [2024-10-08 18:36:00.191742] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.070 [2024-10-08 18:36:00.191751] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.070 [2024-10-08 18:36:00.191769] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.070 [2024-10-08 18:36:00.194501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.070 [2024-10-08 18:36:00.204022] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.070 [2024-10-08 18:36:00.204433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.070 [2024-10-08 18:36:00.204453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.070 [2024-10-08 18:36:00.204462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.070 [2024-10-08 18:36:00.204635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.070 [2024-10-08 18:36:00.204811] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.070 [2024-10-08 18:36:00.204820] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.070 [2024-10-08 18:36:00.204828] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.070 [2024-10-08 18:36:00.207564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.070 [2024-10-08 18:36:00.217088] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.070 [2024-10-08 18:36:00.217538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.070 [2024-10-08 18:36:00.217557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.070 [2024-10-08 18:36:00.217567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.070 [2024-10-08 18:36:00.217739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.070 [2024-10-08 18:36:00.217911] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.070 [2024-10-08 18:36:00.217920] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.070 [2024-10-08 18:36:00.217927] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.070 [2024-10-08 18:36:00.220662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.070 [2024-10-08 18:36:00.230168] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.070 [2024-10-08 18:36:00.230589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.070 [2024-10-08 18:36:00.230606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.070 [2024-10-08 18:36:00.230614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.070 [2024-10-08 18:36:00.230786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.070 [2024-10-08 18:36:00.230958] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.070 [2024-10-08 18:36:00.230966] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.070 [2024-10-08 18:36:00.230973] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.070 [2024-10-08 18:36:00.233705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.070 [2024-10-08 18:36:00.243237] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.070 [2024-10-08 18:36:00.243712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.070 [2024-10-08 18:36:00.243728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.070 [2024-10-08 18:36:00.243735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.070 [2024-10-08 18:36:00.243907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.070 [2024-10-08 18:36:00.244078] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.070 [2024-10-08 18:36:00.244087] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.070 [2024-10-08 18:36:00.244093] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.070 [2024-10-08 18:36:00.246827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.070 [2024-10-08 18:36:00.256336] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.070 [2024-10-08 18:36:00.256750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.070 [2024-10-08 18:36:00.256766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.070 [2024-10-08 18:36:00.256774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.070 [2024-10-08 18:36:00.256945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.070 [2024-10-08 18:36:00.257116] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.070 [2024-10-08 18:36:00.257124] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.070 [2024-10-08 18:36:00.257131] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.070 [2024-10-08 18:36:00.259860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.071 [2024-10-08 18:36:00.269392] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.071 [2024-10-08 18:36:00.269800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-08 18:36:00.269816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.071 [2024-10-08 18:36:00.269823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.071 [2024-10-08 18:36:00.269995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.071 [2024-10-08 18:36:00.270166] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.071 [2024-10-08 18:36:00.270174] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.071 [2024-10-08 18:36:00.270180] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.071 [2024-10-08 18:36:00.272916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.071 [2024-10-08 18:36:00.282430] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.071 [2024-10-08 18:36:00.282883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-08 18:36:00.282899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.071 [2024-10-08 18:36:00.282908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.071 [2024-10-08 18:36:00.283083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.071 [2024-10-08 18:36:00.283256] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.071 [2024-10-08 18:36:00.283264] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.071 [2024-10-08 18:36:00.283271] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.071 [2024-10-08 18:36:00.286007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.071 [2024-10-08 18:36:00.295380] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.071 [2024-10-08 18:36:00.295741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-08 18:36:00.295757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.071 [2024-10-08 18:36:00.295765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.071 [2024-10-08 18:36:00.295935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.071 [2024-10-08 18:36:00.296106] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.071 [2024-10-08 18:36:00.296115] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.071 [2024-10-08 18:36:00.296121] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.071 [2024-10-08 18:36:00.298853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.071 [2024-10-08 18:36:00.308379] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.071 [2024-10-08 18:36:00.308783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-08 18:36:00.308799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.071 [2024-10-08 18:36:00.308806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.071 [2024-10-08 18:36:00.308978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.071 [2024-10-08 18:36:00.309149] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.071 [2024-10-08 18:36:00.309157] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.071 [2024-10-08 18:36:00.309164] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.071 [2024-10-08 18:36:00.311899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.071 [2024-10-08 18:36:00.321413] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.071 [2024-10-08 18:36:00.321767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-08 18:36:00.321783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.071 [2024-10-08 18:36:00.321791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.071 [2024-10-08 18:36:00.321962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.071 [2024-10-08 18:36:00.322139] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.071 [2024-10-08 18:36:00.322148] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.071 [2024-10-08 18:36:00.322158] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.071 [2024-10-08 18:36:00.324893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.071 [2024-10-08 18:36:00.334407] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.071 [2024-10-08 18:36:00.334697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-08 18:36:00.334712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.071 [2024-10-08 18:36:00.334720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.071 [2024-10-08 18:36:00.334890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.071 [2024-10-08 18:36:00.335062] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.071 [2024-10-08 18:36:00.335071] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.071 [2024-10-08 18:36:00.335077] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.071 [2024-10-08 18:36:00.339061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.071 4921.17 IOPS, 19.22 MiB/s [2024-10-08T16:36:00.394Z] [2024-10-08 18:36:00.347477] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.071 [2024-10-08 18:36:00.347881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-08 18:36:00.347898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.071 [2024-10-08 18:36:00.347906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.071 [2024-10-08 18:36:00.348077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.071 [2024-10-08 18:36:00.348248] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.071 [2024-10-08 18:36:00.348257] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.071 [2024-10-08 18:36:00.348263] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.071 [2024-10-08 18:36:00.350998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.071 [2024-10-08 18:36:00.360517] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.071 [2024-10-08 18:36:00.360903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-08 18:36:00.360919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.071 [2024-10-08 18:36:00.360926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.071 [2024-10-08 18:36:00.361098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.071 [2024-10-08 18:36:00.361269] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.071 [2024-10-08 18:36:00.361277] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.071 [2024-10-08 18:36:00.361284] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.071 [2024-10-08 18:36:00.364024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
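The bdevperf-style progress marker embedded above (4921.17 IOPS, 19.22 MiB/s) is consistent with a 4 KiB I/O size — an inference, since the block size is not logged here: 4921.17 IOPS x 4096 B ≈ 20,157,112 B/s, and 20,157,112 / 1,048,576 ≈ 19.22 MiB/s.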
00:28:07.071 [2024-10-08 18:36:00.373548] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.071 [2024-10-08 18:36:00.374000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-08 18:36:00.374019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.071 [2024-10-08 18:36:00.374027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.071 [2024-10-08 18:36:00.374199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.071 [2024-10-08 18:36:00.374371] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.071 [2024-10-08 18:36:00.374386] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.071 [2024-10-08 18:36:00.374393] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.071 [2024-10-08 18:36:00.377120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.071 [2024-10-08 18:36:00.386631] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.071 [2024-10-08 18:36:00.387015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-08 18:36:00.387031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.071 [2024-10-08 18:36:00.387039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.071 [2024-10-08 18:36:00.387210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.071 [2024-10-08 18:36:00.387387] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.071 [2024-10-08 18:36:00.387396] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.071 [2024-10-08 18:36:00.387403] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.071 [2024-10-08 18:36:00.390156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.331 [2024-10-08 18:36:00.399674] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.331 [2024-10-08 18:36:00.400091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.331 [2024-10-08 18:36:00.400108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.331 [2024-10-08 18:36:00.400116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.331 [2024-10-08 18:36:00.400287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.331 [2024-10-08 18:36:00.400463] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.331 [2024-10-08 18:36:00.400472] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.331 [2024-10-08 18:36:00.400478] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.331 [2024-10-08 18:36:00.403206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.331 [2024-10-08 18:36:00.412723] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.331 [2024-10-08 18:36:00.413057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.331 [2024-10-08 18:36:00.413073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.331 [2024-10-08 18:36:00.413081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.331 [2024-10-08 18:36:00.413251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.331 [2024-10-08 18:36:00.413435] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.331 [2024-10-08 18:36:00.413444] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.331 [2024-10-08 18:36:00.413450] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.331 [2024-10-08 18:36:00.416180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.331 [2024-10-08 18:36:00.425707] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.331 [2024-10-08 18:36:00.426133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.331 [2024-10-08 18:36:00.426149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.331 [2024-10-08 18:36:00.426157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.331 [2024-10-08 18:36:00.426327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.331 [2024-10-08 18:36:00.426503] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.331 [2024-10-08 18:36:00.426512] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.331 [2024-10-08 18:36:00.426519] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.331 [2024-10-08 18:36:00.429246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.331 [2024-10-08 18:36:00.438764] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.331 [2024-10-08 18:36:00.439196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.331 [2024-10-08 18:36:00.439212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.331 [2024-10-08 18:36:00.439219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.331 [2024-10-08 18:36:00.439396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.331 [2024-10-08 18:36:00.439567] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.331 [2024-10-08 18:36:00.439576] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.331 [2024-10-08 18:36:00.439582] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.331 [2024-10-08 18:36:00.442319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.331 [2024-10-08 18:36:00.451831] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.331 [2024-10-08 18:36:00.452261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.331 [2024-10-08 18:36:00.452277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.331 [2024-10-08 18:36:00.452284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.331 [2024-10-08 18:36:00.452459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.331 [2024-10-08 18:36:00.452631] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.331 [2024-10-08 18:36:00.452640] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.331 [2024-10-08 18:36:00.452646] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.331 [2024-10-08 18:36:00.455382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.331 [2024-10-08 18:36:00.464885] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.331 [2024-10-08 18:36:00.465310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.331 [2024-10-08 18:36:00.465326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.331 [2024-10-08 18:36:00.465334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.331 [2024-10-08 18:36:00.465509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.331 [2024-10-08 18:36:00.465682] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.331 [2024-10-08 18:36:00.465690] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.331 [2024-10-08 18:36:00.465697] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.331 [2024-10-08 18:36:00.468426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.331 [2024-10-08 18:36:00.477931] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.331 [2024-10-08 18:36:00.478283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.331 [2024-10-08 18:36:00.478299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.331 [2024-10-08 18:36:00.478307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.331 [2024-10-08 18:36:00.478483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.331 [2024-10-08 18:36:00.478655] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.331 [2024-10-08 18:36:00.478663] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.331 [2024-10-08 18:36:00.478670] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.331 [2024-10-08 18:36:00.481399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.331 [2024-10-08 18:36:00.490909] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.331 [2024-10-08 18:36:00.491261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.331 [2024-10-08 18:36:00.491277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.331 [2024-10-08 18:36:00.491285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.331 [2024-10-08 18:36:00.491461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.331 [2024-10-08 18:36:00.491633] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.331 [2024-10-08 18:36:00.491641] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.331 [2024-10-08 18:36:00.491648] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.331 [2024-10-08 18:36:00.494379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.331 [2024-10-08 18:36:00.503897] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.331 [2024-10-08 18:36:00.504301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.331 [2024-10-08 18:36:00.504317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.331 [2024-10-08 18:36:00.504328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.331 [2024-10-08 18:36:00.504504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.331 [2024-10-08 18:36:00.504676] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.331 [2024-10-08 18:36:00.504684] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.332 [2024-10-08 18:36:00.504691] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.332 [2024-10-08 18:36:00.507421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.332 [2024-10-08 18:36:00.516931] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.332 [2024-10-08 18:36:00.517360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.332 [2024-10-08 18:36:00.517380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.332 [2024-10-08 18:36:00.517388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.332 [2024-10-08 18:36:00.517559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.332 [2024-10-08 18:36:00.517730] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.332 [2024-10-08 18:36:00.517739] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.332 [2024-10-08 18:36:00.517745] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.332 [2024-10-08 18:36:00.520478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.332 [2024-10-08 18:36:00.529985] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:07.332 [2024-10-08 18:36:00.530350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.332 [2024-10-08 18:36:00.530366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420
00:28:07.332 [2024-10-08 18:36:00.530374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set
00:28:07.332 [2024-10-08 18:36:00.530549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor
00:28:07.332 [2024-10-08 18:36:00.530720] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:07.332 [2024-10-08 18:36:00.530729] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:07.332 [2024-10-08 18:36:00.530735] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:07.332 [2024-10-08 18:36:00.533466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:07.332-00:28:07.595 [... the identical nine-line reset cycle (connect() to 10.0.0.2:4420 refused with errno = 111 on tqpair=0x1e3a5c0) repeats 23 more times at roughly 13 ms intervals, from 18:36:00.542981 through 18:36:00.832949, every attempt ending with "Resetting controller failed." ...]
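Errno 111 is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 yet, because the target's listener is only added further down (the nvmf_subsystem_add_listener call that succeeds at 18:36:00.964941), so every reconnect poll fails immediately. A quick, hypothetical way to check that condition from the shell — not part of the test script — using bash's /dev/tcp pseudo-device, which performs the same plain connect():

    addr=10.0.0.2 port=4420
    # bash opens a TCP socket via /dev/tcp; a closed port fails the same
    # way the initiator does above (connect() -> ECONNREFUSED, errno 111)
    if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
        echo "listener is up on ${addr}:${port}"
    else
        echo "connect() to ${addr}:${port} refused or timed out -- the errno 111 path"
    fi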
00:28:07.595 [2024-10-08 18:36:00.842475] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.595 [2024-10-08 18:36:00.842816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.595 [2024-10-08 18:36:00.842832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.595 [2024-10-08 18:36:00.842840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.595 [2024-10-08 18:36:00.843011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.595 [2024-10-08 18:36:00.843190] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.595 [2024-10-08 18:36:00.843199] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.595 [2024-10-08 18:36:00.843206] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.595 [2024-10-08 18:36:00.845941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.595 [2024-10-08 18:36:00.855460] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.595 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:07.595 [2024-10-08 18:36:00.855843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.595 [2024-10-08 18:36:00.855859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.595 [2024-10-08 18:36:00.855867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.595 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:07.595 [2024-10-08 18:36:00.856038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.595 [2024-10-08 18:36:00.856211] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.595 [2024-10-08 18:36:00.856219] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.595 [2024-10-08 18:36:00.856226] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.595 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:07.595 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:07.595 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:07.595 [2024-10-08 18:36:00.858963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
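Threaded through the reset errors above, the script finishes its start_nvmf_tgt phase: the (( i == 0 )) test at autotest_common.sh@860 is the tail of a countdown-style readiness gate, and the return 0 at @864 reports that the target app came up before the retries ran out, after which timing_exit stamps the phase. A generic sketch of that gate pattern (a hypothetical reconstruction, not the verbatim SPDK helper, whose details differ):

    wait_for_ready() {                           # generic countdown gate, hypothetical
        local pid=$1 sock=$2 i
        for ((i = 40; i != 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died while starting
            [[ -S $sock ]] && break                  # readiness signal: RPC socket exists
            sleep 0.5
        done
        (( i == 0 )) && return 1                 # countdown exhausted without readiness
        return 0                                 # the success path seen in the trace
    }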
00:28:07.595 [2024-10-08 18:36:00.868491] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.595 [2024-10-08 18:36:00.868945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.595 [2024-10-08 18:36:00.868962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.595 [2024-10-08 18:36:00.868969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.595 [2024-10-08 18:36:00.869140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.595 [2024-10-08 18:36:00.869311] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.595 [2024-10-08 18:36:00.869320] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.595 [2024-10-08 18:36:00.869327] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.595 [2024-10-08 18:36:00.872062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.595 [2024-10-08 18:36:00.881468] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.595 [2024-10-08 18:36:00.881798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.595 [2024-10-08 18:36:00.881815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.595 [2024-10-08 18:36:00.881823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.595 [2024-10-08 18:36:00.881994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.595 [2024-10-08 18:36:00.882166] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.595 [2024-10-08 18:36:00.882176] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.595 [2024-10-08 18:36:00.882183] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.595 [2024-10-08 18:36:00.884922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.595 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.595 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:07.595 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.595 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:07.595 [2024-10-08 18:36:00.894454] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.595 [2024-10-08 18:36:00.894792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.595 [2024-10-08 18:36:00.894807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.595 [2024-10-08 18:36:00.894815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.595 [2024-10-08 18:36:00.894985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.595 [2024-10-08 18:36:00.895157] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.595 [2024-10-08 18:36:00.895167] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.595 [2024-10-08 18:36:00.895175] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.595 [2024-10-08 18:36:00.895515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.595 [2024-10-08 18:36:00.897913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.595 [2024-10-08 18:36:00.907449] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.595 [2024-10-08 18:36:00.907734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.595 [2024-10-08 18:36:00.907749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.595 [2024-10-08 18:36:00.907757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.595 [2024-10-08 18:36:00.907928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.595 [2024-10-08 18:36:00.908099] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.595 [2024-10-08 18:36:00.908107] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.595 [2024-10-08 18:36:00.908114] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.595 [2024-10-08 18:36:00.910848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
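In between the initiator's failed resets, bdevperf.sh is building the target side over JSON-RPC: nvmf_create_transport above, then (in the trace that follows) a malloc bdev, a subsystem, a namespace, and finally the listener. A hedged sketch of the same five calls issued directly with SPDK's scripts/rpc.py instead of the rpc_cmd wrapper — the flags are copied verbatim from the traced commands, and the default RPC socket is an assumption:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumes the default /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420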
00:28:07.853 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.853 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:07.853 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.853 [2024-10-08 18:36:00.920532] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.853 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:07.853 [2024-10-08 18:36:00.920813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.853 [2024-10-08 18:36:00.920830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.854 [2024-10-08 18:36:00.920837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.854 [2024-10-08 18:36:00.921009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.854 [2024-10-08 18:36:00.921181] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.854 [2024-10-08 18:36:00.921191] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.854 [2024-10-08 18:36:00.921207] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.854 [2024-10-08 18:36:00.923942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.854 [2024-10-08 18:36:00.933510] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.854 [2024-10-08 18:36:00.933805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.854 [2024-10-08 18:36:00.933821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.854 [2024-10-08 18:36:00.933830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.854 [2024-10-08 18:36:00.934002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.854 [2024-10-08 18:36:00.934173] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.854 [2024-10-08 18:36:00.934182] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.854 [2024-10-08 18:36:00.934189] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.854 [2024-10-08 18:36:00.936930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
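The two positional arguments to bdev_malloc_create traced above are the bdev size in MiB and the block size in bytes ("Malloc0", echoed on the next line, is the created bdev's name), so the RAM disk backing this run holds 131072 blocks:

    awk 'BEGIN { print 64 * 1048576 / 512, "blocks" }'   # 64 MiB at 512 B per block -> 131072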
00:28:07.854 Malloc0 00:28:07.854 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.854 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:07.854 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.854 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:07.854 [2024-10-08 18:36:00.946473] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.854 [2024-10-08 18:36:00.946829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.854 [2024-10-08 18:36:00.946846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.854 [2024-10-08 18:36:00.946854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.854 [2024-10-08 18:36:00.947025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.854 [2024-10-08 18:36:00.947198] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.854 [2024-10-08 18:36:00.947206] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.854 [2024-10-08 18:36:00.947213] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.854 [2024-10-08 18:36:00.949948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.854 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.854 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:07.854 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.854 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:07.854 [2024-10-08 18:36:00.959474] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.854 [2024-10-08 18:36:00.959844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.854 [2024-10-08 18:36:00.959860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a5c0 with addr=10.0.0.2, port=4420 00:28:07.854 [2024-10-08 18:36:00.959867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a5c0 is same with the state(6) to be set 00:28:07.854 [2024-10-08 18:36:00.960043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a5c0 (9): Bad file descriptor 00:28:07.854 [2024-10-08 18:36:00.960213] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.854 [2024-10-08 18:36:00.960222] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.854 [2024-10-08 18:36:00.960228] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
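The subsystem name nqn.2016-06.io.spdk:cnode1 created above follows the NVMe spec's nqn.yyyy-mm.reverse-domain:identifier layout, and -s SPDK00000000000001 sets its serial number. A quick, hypothetical shape check (the regex is a simplification of the spec's grammar):

    nqn=nqn.2016-06.io.spdk:cnode1
    [[ $nqn =~ ^nqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+:.+$ ]] && echo "well-formed NQN: $nqn"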
00:28:07.854 18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:07.854 [2024-10-08 18:36:00.962964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-10-08 18:36:00.964941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 576970
[2024-10-08 18:36:00.972492] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-10-08 18:36:01.007281] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:09.046 4761.29 IOPS, 18.60 MiB/s
[2024-10-08T16:36:03.746Z] 5581.88 IOPS, 21.80 MiB/s
[2024-10-08T16:36:04.681Z] 6218.44 IOPS, 24.29 MiB/s
[2024-10-08T16:36:05.616Z] 6732.40 IOPS, 26.30 MiB/s
[2024-10-08T16:36:06.551Z] 7143.82 IOPS, 27.91 MiB/s
[2024-10-08T16:36:07.487Z] 7493.17 IOPS, 29.27 MiB/s
[2024-10-08T16:36:08.433Z] 7797.00 IOPS, 30.46 MiB/s
[2024-10-08T16:36:09.372Z] 8044.64 IOPS, 31.42 MiB/s
[2024-10-08T16:36:09.372Z] 8266.67 IOPS, 32.29 MiB/s
00:28:16.049 Latency(us)
[2024-10-08T16:36:09.372Z] Device Information : runtime(s)  IOPS     MiB/s  Fail/s    TO/s  Average  min     max
00:28:16.049 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:16.049 Verification LBA range: start 0x0 length 0x4000
00:28:16.049 Nvme1n1            : 15.00       8272.50  32.31  13067.14  0.00  5978.74  415.45  15541.39
00:28:16.049 [2024-10-08T16:36:09.372Z] ===================================================================================================================
00:28:16.049 [2024-10-08T16:36:09.372Z] Total              :             8272.50  32.31  13067.14  0.00  5978.74  415.45  15541.39
00:28:16.307 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:16.307 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup
18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 18:36:09
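One sanity check on the bdevperf table above: the MiB/s column is just the IOPS column times the 4096-byte IO size from the Job line, so the figures can be recomputed directly:

    awk 'BEGIN { printf "%.2f MiB/s\n", 8272.50 * 4096 / 1048576 }'   # -> 32.31, matching the Nvme1n1 row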
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:16.307 rmmod nvme_tcp 00:28:16.307 rmmod nvme_fabrics 00:28:16.307 rmmod nvme_keyring 00:28:16.566 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:16.566 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:16.566 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:16.566 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 577989 ']' 00:28:16.566 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 577989 00:28:16.566 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 577989 ']' 00:28:16.566 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 577989 00:28:16.566 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:28:16.566 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:16.566 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 577989 00:28:16.566 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:16.566 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:16.566 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 577989' 00:28:16.566 killing process with pid 577989 00:28:16.566 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 577989 00:28:16.566 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 577989 00:28:16.824 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:16.824 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:16.824 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:16.824 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:16.824 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:28:16.824 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:16.824 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:28:16.824 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:16.824 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:16.824 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.824 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.824 18:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.728 18:36:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:18.728 00:28:18.728 real 0m26.774s 00:28:18.728 user 1m2.972s 00:28:18.728 sys 0m6.776s 00:28:18.728 18:36:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:18.728 18:36:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:18.728 
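The killprocess 577989 trace above can be read back into a small helper; this is a hedged reconstruction that follows the traced checks (pid supplied, process still alive, command name via ps, not a sudo wrapper), not the verbatim autotest_common.sh body:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                          # the '[' -z 577989 ']' check
        kill -0 "$pid" || return 1                         # still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # -> reactor_1 here
        fi
        [ "$process_name" = sudo ] && return 1             # assumption: sudo wrappers are handled specially
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                                # reap the child, ignoring its exit status
    }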
************************************ 00:28:18.728 END TEST nvmf_bdevperf 00:28:18.728 ************************************ 00:28:18.728 18:36:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:18.728 18:36:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:18.728 18:36:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:18.728 18:36:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.728 ************************************ 00:28:18.728 START TEST nvmf_target_disconnect 00:28:18.728 ************************************ 00:28:18.728 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:18.988 * Looking for test storage... 00:28:18.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:18.988 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:18.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.989 --rc genhtml_branch_coverage=1 00:28:18.989 --rc genhtml_function_coverage=1 00:28:18.989 --rc genhtml_legend=1 00:28:18.989 --rc geninfo_all_blocks=1 00:28:18.989 --rc geninfo_unexecuted_blocks=1 00:28:18.989 00:28:18.989 ' 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:18.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.989 --rc genhtml_branch_coverage=1 00:28:18.989 --rc genhtml_function_coverage=1 00:28:18.989 --rc genhtml_legend=1 00:28:18.989 --rc geninfo_all_blocks=1 00:28:18.989 --rc geninfo_unexecuted_blocks=1 00:28:18.989 00:28:18.989 ' 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:18.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.989 --rc genhtml_branch_coverage=1 00:28:18.989 --rc genhtml_function_coverage=1 00:28:18.989 --rc genhtml_legend=1 00:28:18.989 --rc geninfo_all_blocks=1 00:28:18.989 --rc geninfo_unexecuted_blocks=1 00:28:18.989 00:28:18.989 ' 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:18.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.989 --rc genhtml_branch_coverage=1 00:28:18.989 --rc genhtml_function_coverage=1 00:28:18.989 --rc genhtml_legend=1 00:28:18.989 --rc geninfo_all_blocks=1 00:28:18.989 --rc geninfo_unexecuted_blocks=1 00:28:18.989 00:28:18.989 ' 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
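The xtrace just shown is scripts/common.sh stepping through lt 1.15 2: cmp_versions splits each version on '.', '-' and ':' into arrays (ver1_l=2, ver2_l=1 components here), then walks the components pairwise until one side wins. A hedged distillation of that comparison loop — simplified, since the real helper also normalizes each component through decimal() as traced above:

    cmp_lt() {                         # usage: cmp_lt 1.15 2   -> returns 0 iff $1 < $2
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                       # equal versions are not less-than
    }

With the traced inputs, cmp_lt 1.15 2 compares 1 against 2 in the first slot and returns 0, which is why the lcov version check passes.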
nvmf/common.sh@7 -- # uname -s 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:18.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:18.989 18:36:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.557 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:25.558 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:25.558 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:25.558 Found net devices under 0000:86:00.0: cvl_0_0 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:25.558 Found net devices under 0000:86:00.1: cvl_0_1 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
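The block above is nvmf/common.sh's NIC discovery: both ports of the Intel E810 adapter (vendor 0x8086, device 0x159b, driver ice) are matched from the PCI bus cache and their kernel net devices (cvl_0_0 under 0000:86:00.0, cvl_0_1 under 0000:86:00.1) are resolved from sysfs. A minimal standalone sketch of the same lookup, assuming the standard sysfs layout; the loop and variable names below are illustrative, not the script's own code:

#!/usr/bin/env bash
# Illustrative sketch only: list E810 (0x8086:0x159b) ports and their netdevs.
# Relies on the standard sysfs layout; nothing here is copied from nvmf/common.sh.
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")    # e.g. 0x8086
    device=$(<"$dev/device")    # e.g. 0x159b
    if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
        for net in "$dev"/net/*; do
            # Each entry is a netdev bound to this port, e.g. cvl_0_0
            [[ -e $net ]] && echo "Found net device under ${dev##*/}: ${net##*/}"
        done
    fi
done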
00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:25.558 18:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:25.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:25.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:28:25.558 00:28:25.558 --- 10.0.0.2 ping statistics --- 00:28:25.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.558 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:25.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:25.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:28:25.558 00:28:25.558 --- 10.0.0.1 ping statistics --- 00:28:25.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.558 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:25.558 ************************************ 00:28:25.558 START TEST nvmf_target_disconnect_tc1 00:28:25.558 ************************************ 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:25.558 18:36:18 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:25.558 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:25.559 [2024-10-08 18:36:18.341090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-10-08 18:36:18.341142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cb70 with addr=10.0.0.2, port=4420 00:28:25.559 [2024-10-08 18:36:18.341182] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:25.559 [2024-10-08 18:36:18.341197] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:25.559 [2024-10-08 18:36:18.341205] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:25.559 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:25.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:25.559 Initializing NVMe Controllers 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:25.559 00:28:25.559 real 0m0.118s 00:28:25.559 user 0m0.040s 00:28:25.559 sys 0m0.077s 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:25.559 ************************************ 00:28:25.559 END TEST nvmf_target_disconnect_tc1 00:28:25.559 ************************************ 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 
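tc1 above is an expected-failure test: the reconnect example is pointed at 10.0.0.2 port 4420 before any target is listening, connect() is refused with errno 111, spdk_nvme_probe() fails, and the NOT wrapper from autotest_common.sh inverts the non-zero exit status (es=1) into a pass. A minimal sketch of that inversion idiom, simplified from what the trace shows; SPDK's real helper also validates the binary via valid_exec_arg first:

# Sketch of an expected-failure wrapper; simplified, not SPDK's exact NOT().
NOT() {
    if "$@"; then
        return 1    # unexpected success means the test should fail
    fi
    return 0        # the expected failure counts as a pass
}

# The tc1 invocation only passes because nothing listens on 10.0.0.2:4420 yet:
NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'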
00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:25.559 ************************************ 00:28:25.559 START TEST nvmf_target_disconnect_tc2 00:28:25.559 ************************************ 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=583610 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 583610 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 583610 ']' 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:25.559 18:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.559 [2024-10-08 18:36:18.484236] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:28:25.559 [2024-10-08 18:36:18.484278] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.559 [2024-10-08 18:36:18.557747] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:25.559 [2024-10-08 18:36:18.627996] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.559 [2024-10-08 18:36:18.628042] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:25.559 [2024-10-08 18:36:18.628050] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.559 [2024-10-08 18:36:18.628056] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.559 [2024-10-08 18:36:18.628061] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.559 [2024-10-08 18:36:18.629714] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:28:25.559 [2024-10-08 18:36:18.629825] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:28:25.559 [2024-10-08 18:36:18.629909] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:28:25.559 [2024-10-08 18:36:18.629909] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:28:26.126 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:26.126 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:26.126 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:26.126 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:26.126 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.126 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.127 Malloc0 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.127 [2024-10-08 18:36:19.373482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.127 18:36:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.127 [2024-10-08 18:36:19.405747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=583708 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:26.127 18:36:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:28.710 18:36:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 583610 00:28:28.710 18:36:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error 
(sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 [2024-10-08 18:36:21.441110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 
00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Write completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 Read completed with error (sct=0, sc=8) 00:28:28.710 starting I/O failed 00:28:28.710 [2024-10-08 18:36:21.441309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:28.710 [2024-10-08 18:36:21.441468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.710 [2024-10-08 18:36:21.441488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.710 qpair failed and we were unable to recover it. 00:28:28.710 [2024-10-08 18:36:21.441648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.710 [2024-10-08 18:36:21.441663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.710 qpair failed and we were unable to recover it. 00:28:28.710 [2024-10-08 18:36:21.441762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.710 [2024-10-08 18:36:21.441775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.710 qpair failed and we were unable to recover it. 00:28:28.710 [2024-10-08 18:36:21.441881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.710 [2024-10-08 18:36:21.441891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.710 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.441969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.441979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 
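The burst of completions above is the point of tc2: the target (pid 583610) is killed with kill -9 while the reconnect example has up to 32 commands per queue in flight. Each outstanding I/O is completed locally with sct=0, sc=8, which in the NVMe generic status set corresponds to "Command Aborted due to SQ Deletion", the status the host driver reports when it aborts requests queued on a failed qpair; the "CQ transport error -6" lines mark the qpairs themselves going down. When sifting a saved copy of such a log, a rough tally separates the aborted I/O from the refused reconnects (the file name below is an assumption):

# Illustrative log triage; assumes this output was saved as target_disconnect.log.
grep -o 'Read completed with error\|Write completed with error' \
    target_disconnect.log | sort | uniq -c            # aborted in-flight I/O
grep -c 'connect() failed, errno = 111' target_disconnect.log  # refused reconnects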
00:28:28.711 [2024-10-08 18:36:21.442123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.442133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.442228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.442249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.442334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.442346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.442519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.442530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.442679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.442689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.442841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.442852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.442929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.442939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.443021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.443031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.443112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.443122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.443279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.443311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 
00:28:28.711 [2024-10-08 18:36:21.443425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.443457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.443579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.443612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.443731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.443761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.443870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.443902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.444094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.444106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.444247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.444257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.444458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.444469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.444600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.444610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.444691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.444701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.444843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.444853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 
00:28:28.711 [2024-10-08 18:36:21.444941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.444952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.445056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.445088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.445275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.445306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.445435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.445469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.445589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.445599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.445669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.445678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.445747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.445758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.445817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.445826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.445920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.445930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.445985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.445995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 
00:28:28.711 [2024-10-08 18:36:21.446189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.446199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.446270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.446280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.446342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.446352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.446431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.446441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.446624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.446634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.446771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.446781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.446915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.446925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.711 [2024-10-08 18:36:21.447071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.711 [2024-10-08 18:36:21.447081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.711 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.447142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.447152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.447226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.447236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 
00:28:28.712 [2024-10-08 18:36:21.447327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.447337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.447504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.447539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.447715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.447747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.447931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.447964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.448089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.448099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.448171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.448180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.448316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.448348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.448473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.448506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.448623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.448654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.448761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.448793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 
00:28:28.712 [2024-10-08 18:36:21.448977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.449009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.449271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.449302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.449564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.449598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.449717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.449749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.449875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.449913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.450026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.450057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.450181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.450214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.450393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.450404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.450527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.450537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.450673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.450684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 
00:28:28.712 [2024-10-08 18:36:21.450818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.450828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.450951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.450961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.451041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.451070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.451185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.451216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.451340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.451372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.451577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.451610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.451730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.451763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.451883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.451914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.452123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.452155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.452339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.452372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 
00:28:28.712 [2024-10-08 18:36:21.452557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.452567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.452630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.452640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.452698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.452727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.452840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.452871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.453102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.453135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.453234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.453267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.712 qpair failed and we were unable to recover it. 00:28:28.712 [2024-10-08 18:36:21.453444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.712 [2024-10-08 18:36:21.453455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.453641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.453674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.453778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.453810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.453926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.453957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 
00:28:28.713 [2024-10-08 18:36:21.454101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.454134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Write completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Write completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Write completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Write completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Write completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Read completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Write completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 Write completed with error (sct=0, sc=8) 00:28:28.713 starting I/O failed 00:28:28.713 [2024-10-08 18:36:21.454457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.713 [2024-10-08 18:36:21.454516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.454530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 
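Each "completed with error ... starting I/O failed" line above is one outstanding I/O being failed back as the broken qpair is torn down: sct and sc are the Status Code Type and Status Code from the NVMe completion entry, and sct=0, sc=8 corresponds to Generic Command Status / Command Aborted due to SQ Deletion in the NVMe base specification. The closing "CQ transport error -6" is -ENXIO, matching the "(No such device or address)" text that spdk_nvme_qpair_process_completions prints. A small standalone sketch that unpacks the 16-bit completion status word (field layout per the NVMe spec; this is illustrative, not SPDK's decoder):

#include <stdint.h>
#include <stdio.h>

/* NVMe completion queue entry status field (DW3 bits 31:16):
 * bit 0 = phase tag, bits 8:1 = SC, bits 11:9 = SCT,
 * bits 13:12 = CRD, bit 14 = More, bit 15 = DNR. */
static void decode_status(uint16_t status)
{
    unsigned sc  = (status >> 1) & 0xff;
    unsigned sct = (status >> 9) & 0x7;
    unsigned dnr = (status >> 15) & 0x1;
    printf("sct=%u, sc=%u%s\n", sct, sc, dnr ? " (do not retry)" : "");
}

int main(void)
{
    /* sct=0 (generic), sc=0x08: the status reported in the log lines;
     * 0x08 in the generic set is Command Aborted due to SQ Deletion. */
    decode_status((uint16_t)(0x08 << 1)); /* prints: sct=0, sc=8 */
    return 0;
}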
00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.454608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.454620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.454782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.454794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.454883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.454895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.455044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.455075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.455250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.455281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.455463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.455496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.455635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.455668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.455860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.455892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.456066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.456097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.456231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.456243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 
00:28:28.713 [2024-10-08 18:36:21.456372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.456389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.456449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.456461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.456595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.456607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.456737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.456749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.456812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.456824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.456906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.456918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.457051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.457082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.457270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.457302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.457495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.457528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.457736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.457755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 
00:28:28.713 [2024-10-08 18:36:21.457900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.713 [2024-10-08 18:36:21.457916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.713 qpair failed and we were unable to recover it. 00:28:28.713 [2024-10-08 18:36:21.458059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.458075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.458230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.458262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.458391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.458424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.458533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.458567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.458782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.458816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.458990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.459022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.459131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.459164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.459296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.459329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.459530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.459546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 
00:28:28.714 [2024-10-08 18:36:21.459634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.459675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.459859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.459891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.460014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.460045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.460154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.460188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.460387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.460403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.460497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.460513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.460665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.460680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.460834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.460866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.460980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.461012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.461121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.461152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 
00:28:28.714 [2024-10-08 18:36:21.461339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.461371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.461492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.461507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.461588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.461604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.461764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.461796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.461923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.461955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.462191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.462223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.462413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.462429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.462510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.462525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.462677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.462709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.462843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.462875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 
00:28:28.714 [2024-10-08 18:36:21.463074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.463105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.463221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.714 [2024-10-08 18:36:21.463250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.714 qpair failed and we were unable to recover it. 00:28:28.714 [2024-10-08 18:36:21.463452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.463469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.463697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.463712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.463814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.463845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.463971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.464004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.464205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.464237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.464424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.464440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.464604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.464621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.464695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.464711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 
00:28:28.715 [2024-10-08 18:36:21.464796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.464813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.464951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.464968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.465036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.465050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.465153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.465168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.465303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.465325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.465405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.465421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.465506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.465521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.465613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.465629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.465810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.465841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.465956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.465987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 
00:28:28.715 [2024-10-08 18:36:21.466115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.466146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.466315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.466344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.466494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.466511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.466610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.466643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.466758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.466789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.467044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.467076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.467342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.467362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.467542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.467563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.467658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.467680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.467848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.467869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 
00:28:28.715 [2024-10-08 18:36:21.468096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.468117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.468269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.468290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.468527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.468549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.468658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.468679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.468840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.468881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.469142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.469173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.469294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.469324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.469596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.715 [2024-10-08 18:36:21.469623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.715 qpair failed and we were unable to recover it. 00:28:28.715 [2024-10-08 18:36:21.469846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.469868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.470029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.470050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 
00:28:28.716 [2024-10-08 18:36:21.470272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.470304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.470505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.470537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.470789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.470822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.471083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.471116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.471336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.471367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.471547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.471569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.471770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.471803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.471985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.472018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.472203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.472235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.472405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.472429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 
00:28:28.716 [2024-10-08 18:36:21.472665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.472688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.472803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.472827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.472932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.472955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.473116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.473139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.473362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.473406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.473522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.473553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.473734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.473766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.473999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.474031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.474129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.474162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.474355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.474406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 
00:28:28.716 [2024-10-08 18:36:21.474644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.474667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.474827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.474850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.475077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.475100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.475252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.475274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.475492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.475515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.475691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.475724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.475903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.475937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.476055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.476087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.476258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.476280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.476373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.476402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 
00:28:28.716 [2024-10-08 18:36:21.476615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.476638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.476798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.476820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.477000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.477033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.477288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.477320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.716 [2024-10-08 18:36:21.477537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.716 [2024-10-08 18:36:21.477561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.716 qpair failed and we were unable to recover it. 00:28:28.717 [2024-10-08 18:36:21.477711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.717 [2024-10-08 18:36:21.477734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.717 qpair failed and we were unable to recover it. 00:28:28.717 [2024-10-08 18:36:21.477839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.717 [2024-10-08 18:36:21.477862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.717 qpair failed and we were unable to recover it. 00:28:28.717 [2024-10-08 18:36:21.478039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.717 [2024-10-08 18:36:21.478061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.717 qpair failed and we were unable to recover it. 00:28:28.717 [2024-10-08 18:36:21.478210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.717 [2024-10-08 18:36:21.478237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.717 qpair failed and we were unable to recover it. 00:28:28.717 [2024-10-08 18:36:21.478345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.717 [2024-10-08 18:36:21.478367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.717 qpair failed and we were unable to recover it. 
00:28:28.717 [2024-10-08 18:36:21.478545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.478569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.478720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.478752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.478989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.479022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.479236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.479279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.479396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.479420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.479525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.479547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.479741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.479764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.479924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.479947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.480105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.480138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.480405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.480439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.480546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.480578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.480768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.480800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.480941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.480973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.481215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.481248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.481431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.481473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.481652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.481684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.481923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.481955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.482081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.482114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.482220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.482253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.482445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.482479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.482598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.482631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.482907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.482948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.483070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.483103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.483278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.483310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.483492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.483526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.483740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.483779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.484018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.484050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.484180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.484213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.484396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.484430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.484689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.484712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.484801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.484822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.484975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.485015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.485121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.717 [2024-10-08 18:36:21.485154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.717 qpair failed and we were unable to recover it.
00:28:28.717 [2024-10-08 18:36:21.485297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.485336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.485461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.485486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.485708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.485740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.485912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.485944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.486134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.486166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.486421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.486445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.486617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.486641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.486742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.486763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.486870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.486891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.487048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.487069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.487241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.487264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.487363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.487391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.487475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.487496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.487583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.487605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.487710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.487732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.487895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.487917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.488162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.488185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.488312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.488345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.488477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.488510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.488686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.488719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.488841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.488873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.488977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.489008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.489135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.489168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.489289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.489312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.489464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.489495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.489707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.489729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.489890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.489913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.489995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.490016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.490169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.490202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.490442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.490477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.490662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.490694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.490897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.490920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.491030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.491052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.491143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.491173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.491339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.491362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.491479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.491512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.718 qpair failed and we were unable to recover it.
00:28:28.718 [2024-10-08 18:36:21.491686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.718 [2024-10-08 18:36:21.491718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.491845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.491877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.492065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.492097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.492218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.492251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.492370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.492412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.492594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.492627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.492750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.492782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.492887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.492920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.493163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.493196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.493313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.493345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.493591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.493670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.493822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.493859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.494040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.494074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.494270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.494303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.494435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.494471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.494651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.494683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.494873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.494907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.495106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.495138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.495270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.495303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.495422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.495448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.495613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.495636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.495871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.495904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.496091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.496123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.496304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.496337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.496538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.496577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.496818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.496849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.496973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.497005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.497186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.497219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.497398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.497421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.497590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.497612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.497855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.719 [2024-10-08 18:36:21.497888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.719 qpair failed and we were unable to recover it.
00:28:28.719 [2024-10-08 18:36:21.498162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.498195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.498396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.498419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.498602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.498634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.498901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.498934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.499200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.499233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.499428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.499462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.499639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.499671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.499805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.499838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.500025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.500056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.500254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.500287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.500474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.500507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.500692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.500715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.500959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.500992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.501180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.501212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.501437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.501471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.501592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.501625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.501756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.501788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.501965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.501998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.502178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.502212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.502445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.502469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.502659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.502690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.502827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.502860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.503047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.503078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.503338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.503370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.503491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.503524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.503646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.503678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.503848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.503880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.504120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.504153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.504396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.504430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.504552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.504575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.504790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.504812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.504963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.505004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.505184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.505215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.505399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.505422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.505519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.505544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.505695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.505717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.505881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.505913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.720 qpair failed and we were unable to recover it.
00:28:28.720 [2024-10-08 18:36:21.506152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.720 [2024-10-08 18:36:21.506184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.506419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.506452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.506647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.506670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.506836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.506868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.507038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.507070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.507329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.507351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.507544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.507576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.507691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.507724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.507965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.507997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.508117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.508149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.508261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.508293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.508474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.508509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.508679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.508702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.508856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.508888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.509058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.509092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.509283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.509314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.509582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.509606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.509777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.509799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.509953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.509986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.510168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.510202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.510315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.510347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.510572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.510606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.510844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.510877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.511139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.511171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.511357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.511406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.511579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.511602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.511754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.511776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.511879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.511900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.512025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.512059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.512296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.512329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.512518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.512551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.512743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.512766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.513005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.721 [2024-10-08 18:36:21.513027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.721 qpair failed and we were unable to recover it.
00:28:28.721 [2024-10-08 18:36:21.513178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.721 [2024-10-08 18:36:21.513201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.721 qpair failed and we were unable to recover it. 00:28:28.721 [2024-10-08 18:36:21.513363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.721 [2024-10-08 18:36:21.513401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.721 qpair failed and we were unable to recover it. 00:28:28.721 [2024-10-08 18:36:21.513589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.721 [2024-10-08 18:36:21.513611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.721 qpair failed and we were unable to recover it. 00:28:28.721 [2024-10-08 18:36:21.513881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.513923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.514063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.514096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.514223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.514256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.514498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.514532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.514728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.514760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.514960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.514993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.515186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.515217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 
00:28:28.722 [2024-10-08 18:36:21.515434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.515467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.515605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.515628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.515856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.515888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.516095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.516128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.516317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.516349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.516587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.516621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.516760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.516792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.516916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.516949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.517074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.517108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.517294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.517317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 
00:28:28.722 [2024-10-08 18:36:21.517477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.517501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.517676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.517699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.517861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.517883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.518049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.518072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.518237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.518270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.518510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.518544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.518751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.518785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.518979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.519012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.519208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.519241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.519422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.519456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 
00:28:28.722 [2024-10-08 18:36:21.519659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.519693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.519873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.519906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.520092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.520130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.520302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.520325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.520497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.520520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.520611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.520632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.520793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.520815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.520984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.521017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.521206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.521228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.521454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.521477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 
00:28:28.722 [2024-10-08 18:36:21.521650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.521673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.521895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.521918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.522132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.722 [2024-10-08 18:36:21.522155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.722 qpair failed and we were unable to recover it. 00:28:28.722 [2024-10-08 18:36:21.522261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.522282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.522444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.522468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.522666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.522698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.522910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.522944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.523180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.523213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.523341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.523384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.523501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.523534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 
00:28:28.723 [2024-10-08 18:36:21.523643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.523666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.523899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.523931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.524127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.524160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.524333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.524365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.524489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.524511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.524723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.524756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.524885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.524916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.525086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.525119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.525355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.525383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.525540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.525577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 
00:28:28.723 [2024-10-08 18:36:21.525695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.525727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.525843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.525876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.526005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.526037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.526167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.526199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.526505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.526540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.526656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.526678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.526895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.526919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.527100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.527123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.527274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.527296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.527400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.527422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 
00:28:28.723 [2024-10-08 18:36:21.527685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.527709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.527924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.527947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.528125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.528157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.528420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.528444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.528599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.528621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.528717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.528738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.528978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.529000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.529111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.529150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.529365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.529419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.529552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.529585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 
00:28:28.723 [2024-10-08 18:36:21.529773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.723 [2024-10-08 18:36:21.529796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.723 qpair failed and we were unable to recover it. 00:28:28.723 [2024-10-08 18:36:21.529964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.529996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.530257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.530290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.530551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.530585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.530759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.530791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.530908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.530941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.531066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.531098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.531364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.531408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.531532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.531566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.531821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.531845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 
00:28:28.724 [2024-10-08 18:36:21.531994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.532016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.532177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.532200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.532312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.532335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.532511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.532534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.532696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.532728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.532858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.532891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.533009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.533042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.533165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.533198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.533493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.533567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.533720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.533756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 
00:28:28.724 [2024-10-08 18:36:21.533936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.533980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.534087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.534112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.534299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.534331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.534674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.534711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.534899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.534932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.535149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.535182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.535420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.535443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.535626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.535659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.535845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.535878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.535998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.536031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 
00:28:28.724 [2024-10-08 18:36:21.536172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.536205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.536323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.536356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.536501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.536534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.536817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.536849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.724 qpair failed and we were unable to recover it. 00:28:28.724 [2024-10-08 18:36:21.536970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.724 [2024-10-08 18:36:21.537003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.537118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.537151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.537266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.537299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.537488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.537512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.537731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.537764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.537947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.537978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 
00:28:28.725 [2024-10-08 18:36:21.538241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.538274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.538390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.538424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.538610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.538644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.538846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.538879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.539118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.539150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.539339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.539387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.539515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.539549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.539727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.539767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.539879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.539905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.540067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.540100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 
00:28:28.725 [2024-10-08 18:36:21.540356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.540396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.540517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.540550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.540734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.540756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.541005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.541038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.541292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.541324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.541543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.541567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.541800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.541832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.541962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.541995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.542167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.542199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.542331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.542364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 
00:28:28.725 [2024-10-08 18:36:21.542559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.542583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.542697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.542730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.542973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.543005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.543200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.543232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.543532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.543556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.543716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.543739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.543929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.543952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.544110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.544132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.544387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.544420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 00:28:28.725 [2024-10-08 18:36:21.544554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.725 [2024-10-08 18:36:21.544586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.725 qpair failed and we were unable to recover it. 
00:28:28.725 [2024-10-08 18:36:21.544789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.725 [2024-10-08 18:36:21.544821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.725 qpair failed and we were unable to recover it.
00:28:28.731 [... the same three-line error group repeats back-to-back, roughly two hundred times, through 2024-10-08 18:36:21.586649: every connect() attempt fails with errno = 111, and every reconnect of tqpair=0xa01c60 to addr=10.0.0.2, port=4420 ends with "qpair failed and we were unable to recover it." ...]
00:28:28.731 [2024-10-08 18:36:21.586811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.731 [2024-10-08 18:36:21.586834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.731 qpair failed and we were unable to recover it. 00:28:28.731 [2024-10-08 18:36:21.586994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.731 [2024-10-08 18:36:21.587017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.731 qpair failed and we were unable to recover it. 00:28:28.731 [2024-10-08 18:36:21.587111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.731 [2024-10-08 18:36:21.587133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.731 qpair failed and we were unable to recover it. 00:28:28.731 [2024-10-08 18:36:21.587322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.731 [2024-10-08 18:36:21.587344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.731 qpair failed and we were unable to recover it. 00:28:28.731 [2024-10-08 18:36:21.587527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.731 [2024-10-08 18:36:21.587551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.731 qpair failed and we were unable to recover it. 00:28:28.731 [2024-10-08 18:36:21.587705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.731 [2024-10-08 18:36:21.587728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.731 qpair failed and we were unable to recover it. 00:28:28.731 [2024-10-08 18:36:21.587813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.731 [2024-10-08 18:36:21.587834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.731 qpair failed and we were unable to recover it. 00:28:28.731 [2024-10-08 18:36:21.588052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.731 [2024-10-08 18:36:21.588074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.731 qpair failed and we were unable to recover it. 00:28:28.731 [2024-10-08 18:36:21.588234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.731 [2024-10-08 18:36:21.588257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.731 qpair failed and we were unable to recover it. 00:28:28.731 [2024-10-08 18:36:21.588365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.731 [2024-10-08 18:36:21.588395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.731 qpair failed and we were unable to recover it. 
00:28:28.731 [2024-10-08 18:36:21.588612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.731 [2024-10-08 18:36:21.588634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.731 qpair failed and we were unable to recover it. 00:28:28.731 [2024-10-08 18:36:21.588791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.731 [2024-10-08 18:36:21.588814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.731 qpair failed and we were unable to recover it. 00:28:28.731 [2024-10-08 18:36:21.588971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.731 [2024-10-08 18:36:21.588997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.731 qpair failed and we were unable to recover it. 00:28:28.731 [2024-10-08 18:36:21.589090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.731 [2024-10-08 18:36:21.589111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.731 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.589210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.589232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.589329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.589350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.589559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.589582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.589690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.589713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.589936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.589958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.590130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.590153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 
00:28:28.732 [2024-10-08 18:36:21.590332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.590355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.590520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.590543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.590650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.590673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.590791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.590813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.590910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.590932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.591176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.591209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.591426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.591460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.591670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.591702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.591943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.591976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.592150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.592183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 
00:28:28.732 [2024-10-08 18:36:21.592386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.592420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.592604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.592637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.592768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.592790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.592883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.592906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.593149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.593183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.593287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.593320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.593573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.593615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.593766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.593788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.593969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.593992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.594165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.594188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 
00:28:28.732 [2024-10-08 18:36:21.594355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.594382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.594532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.594555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.594649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.594670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.594777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.594799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.594994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.595027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.595230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.595263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.595510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.595533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.595710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.595733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.595922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.595945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.596119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.596151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 
00:28:28.732 [2024-10-08 18:36:21.596340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.596373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.596671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.732 [2024-10-08 18:36:21.596704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.732 qpair failed and we were unable to recover it. 00:28:28.732 [2024-10-08 18:36:21.596893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.596925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.597034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.597066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.597330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.597363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.597551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.597585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.597792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.597815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.597979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.598002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.598121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.598153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.598395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.598428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 
00:28:28.733 [2024-10-08 18:36:21.598615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.598648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.598831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.598854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.599029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.599051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.599155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.599178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.599267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.599290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.599390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.599413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.599507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.599528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.599692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.599735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.599911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.599942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.600130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.600163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 
00:28:28.733 [2024-10-08 18:36:21.600302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.600335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.600545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.600579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.600777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.600810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.601061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.601094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.601332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.601364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.601592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.601625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.601904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.601927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.602032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.602054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.602213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.602236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.602484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.602518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 
00:28:28.733 [2024-10-08 18:36:21.602631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.602668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.602858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.602891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.603128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.603160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.603274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.733 [2024-10-08 18:36:21.603307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.733 qpair failed and we were unable to recover it. 00:28:28.733 [2024-10-08 18:36:21.603480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.603504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.603749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.603771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.604006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.604029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.604196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.604219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.604392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.604426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.604607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.604630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 
00:28:28.734 [2024-10-08 18:36:21.604808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.604840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.605012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.605044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.605177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.605209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.605334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.605367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.605581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.605605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.605705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.605727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.605811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.605832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.606046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.606069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.606174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.606196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.606285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.606306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 
00:28:28.734 [2024-10-08 18:36:21.606466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.606490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.606679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.606702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.606854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.606895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.607012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.607044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.607167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.607200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.607491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.607514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.607611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.607634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.607780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.607821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.607952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.607985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.608172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.608206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 
00:28:28.734 [2024-10-08 18:36:21.608338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.608370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.608591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.608624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.608804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.608827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.608942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.608974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.609158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.609192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.609308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.609340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.609531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.609554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.609705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.609728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.609848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.609880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.610090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.610123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 
00:28:28.734 [2024-10-08 18:36:21.610329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.610361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.610498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.610521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.610678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.610700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.610816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.734 [2024-10-08 18:36:21.610839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.734 qpair failed and we were unable to recover it. 00:28:28.734 [2024-10-08 18:36:21.611076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.611109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.611228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.611261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.611458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.611493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.611676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.611700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.611856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.611888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.612010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.612042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 
00:28:28.735 [2024-10-08 18:36:21.612158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.612191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.612407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.612441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.612617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.612640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.612789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.612812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.612972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.612995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.613221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.613254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.613510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.613544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.613669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.613693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.613777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.613797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.614040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.614073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 
00:28:28.735 [2024-10-08 18:36:21.614277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.614310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.614549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.614581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.614883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.614915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.615096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.615129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.615302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.615335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.615514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.615537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.615722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.615745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.615924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.615947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.616164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.616190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.616299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.616321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 
00:28:28.735 [2024-10-08 18:36:21.616427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.616449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.616601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.616623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.616778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.616810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.616936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.616968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.617171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.617204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.617383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.617416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.617654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.617686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.617816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.617849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.618136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.618168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.618356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.618396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 
00:28:28.735 [2024-10-08 18:36:21.618632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.618666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.618793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.618816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.735 qpair failed and we were unable to recover it. 00:28:28.735 [2024-10-08 18:36:21.618918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.735 [2024-10-08 18:36:21.618941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.619160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.619183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.619393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.619426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.619547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.619580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.619749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.619781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.619910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.619931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.620094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.620116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.620318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.620340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 
00:28:28.736 [2024-10-08 18:36:21.620459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.620483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.620651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.620673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.620788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.620821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.620958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.620990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.621178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.621210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.621394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.621429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.621629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.621652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.621738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.621758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.622012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.622035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.622203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.622225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 
00:28:28.736 [2024-10-08 18:36:21.622311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.622331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.622483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.622507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.622747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.622769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.622871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.622893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.623040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.623080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.623272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.623305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.623416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.623439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.623605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.623627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.623726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.623749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.623837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.623866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 
00:28:28.736 [2024-10-08 18:36:21.624041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.624064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.624221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.624253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.624444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.624479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.624616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.624648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.624754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.624777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.624898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.624921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.625077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.625110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.625323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.625355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.625537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.625571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.625750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.625772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 
00:28:28.736 [2024-10-08 18:36:21.625876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.625898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.626048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.736 [2024-10-08 18:36:21.626070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.736 qpair failed and we were unable to recover it. 00:28:28.736 [2024-10-08 18:36:21.626176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.626218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.626353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.626397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.626570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.626603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.626842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.626875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.627127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.627159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.627346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.627389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.627591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.627624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.627822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.627855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 
00:28:28.737 [2024-10-08 18:36:21.628114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.628146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.628391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.628426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.628633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.628664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.628794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.628817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.628907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.628928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.629076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.629099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.629321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.629359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.629563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.629596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.629868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.629901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.630084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.630116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 
00:28:28.737 [2024-10-08 18:36:21.630285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.630317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.630541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.630575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.630746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.630769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.630953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.630985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.631294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.631326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.631545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.631579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.631767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.631798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.631999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.632031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.632224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.632256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.632464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.632499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 
00:28:28.737 [2024-10-08 18:36:21.632616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.632649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.632838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.632870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.633006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.633039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.633241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.633274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.633450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.633484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.633601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.633634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.633869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.633892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.633999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.634021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.634183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.634226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.634491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.634525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 
00:28:28.737 [2024-10-08 18:36:21.634655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.634688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.634925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.634958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.737 [2024-10-08 18:36:21.635139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.737 [2024-10-08 18:36:21.635171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.737 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.635347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.635388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.635516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.635551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.635814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.635847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.636300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.636339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.636486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.636520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.636675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.636699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.636855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.636878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 
00:28:28.738 [2024-10-08 18:36:21.637124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.637146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.637245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.637284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.637419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.637454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.637583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.637614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.637814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.637837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.638008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.638031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.638120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.638141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.638297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.638325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.638484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.638508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.638676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.638709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 
00:28:28.738 [2024-10-08 18:36:21.638967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.638999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.639111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.639142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.639365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.639409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.639519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.639541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.639711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.639733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.639913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.639935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.640086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.640108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.640218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.640240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.640408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.640432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.640542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.640564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 
00:28:28.738 [2024-10-08 18:36:21.640736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.640759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.641031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.641054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.641273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.641296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.641520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.641544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.641703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.641725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.641889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.641913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.642062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.642085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.642303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.642336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.642624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.738 [2024-10-08 18:36:21.642657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.738 qpair failed and we were unable to recover it. 00:28:28.738 [2024-10-08 18:36:21.642838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.642875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 
00:28:28.739 [2024-10-08 18:36:21.643038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.643061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.643286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.643309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.643529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.643554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.643667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.643690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.643845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.643871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.644048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.644071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.644301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.644323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.644434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.644459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.644633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.644658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.644759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.644792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 
00:28:28.739 [2024-10-08 18:36:21.644983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.645016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.645142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.645176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.645361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.645406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.645523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.645562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.645712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.645734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.645815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.645836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.645999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.646021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.646207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.646240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.646449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.646484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.646590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.646622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 
00:28:28.739 [2024-10-08 18:36:21.646885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.646908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.647054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.647077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.647176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.647198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.647453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.647494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.647592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.647613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.647727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.647750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.647900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.647922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.648030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.648071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.648208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.648241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.648447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.648481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 
00:28:28.739 [2024-10-08 18:36:21.648662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.648685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.648775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.648796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.648902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.648925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.649030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.649053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.649237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.649259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.649408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.649443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.649621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.649652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.649825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.649859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.650049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.650072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.739 qpair failed and we were unable to recover it. 00:28:28.739 [2024-10-08 18:36:21.650328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.739 [2024-10-08 18:36:21.650360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.740 qpair failed and we were unable to recover it. 
00:28:28.740 [2024-10-08 18:36:21.650584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.650618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.650790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.650812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.650959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.650981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.651133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.651156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.651325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.651358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.651542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.651580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.651821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.651852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.652138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.652172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.652434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.652468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.652716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.652749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.652868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.652900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.653093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.653126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.653415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.653450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.653720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.653753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.653980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.654004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.654105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.654127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.654301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.654324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.654418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.654441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.654681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.654704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.654855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.654896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.655014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.655047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.655305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.655338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.655601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.655636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.655820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.655852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.656119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.656151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.656438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.656473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.656658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.656680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.656928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.656969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.657252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.657283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.657478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.657511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.657757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.657790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.658077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.658109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.658390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.658424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.658662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.658695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.658987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.659019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.659278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.659310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.659435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.659470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.659704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.659726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.659902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.659933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.660138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.740 [2024-10-08 18:36:21.660170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.740 qpair failed and we were unable to recover it.
00:28:28.740 [2024-10-08 18:36:21.660352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.660396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.660532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.660564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.660773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.660805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.660992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.661024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.661309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.661340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.661638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.661672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.661954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.662044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.662348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.662397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.662691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.662724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.663011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.663043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.663316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.663349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.663559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.663593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.663861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.663894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.664182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.664215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.664479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.664514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.664711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.664737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.664986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.665009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.665176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.665199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.665437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.665470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.665649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.665680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.665863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.665887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.666106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.666138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.666349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.666392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.666601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.666634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.666844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.666879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.667071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.667103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.667382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.667416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.667724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.667759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.668001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.668033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.668342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.668384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.668628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.668660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.668915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.668946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.669188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.669221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.669432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.669469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.669674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.669706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.669944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.669978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.670186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.670219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.670485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.670537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.670807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.670841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.741 [2024-10-08 18:36:21.671081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.741 [2024-10-08 18:36:21.671107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.741 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.671270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.671293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.671443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.671466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.671686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.671718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.672003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.672036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.672252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.672285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.672550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.672585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.672871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.672911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.673120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.673154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.673343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.673386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.673566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.673598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.673783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.673826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.673934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.673956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.674172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.674195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.674436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.674469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.674718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.674751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.675039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.675073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.675303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.675336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.675615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.675648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.675884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.675916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.676121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.676143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.676409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.676448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.676700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.676733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.677001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.677024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.677249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.677272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.677498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.677533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.677784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.677816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.678102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.678133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.678387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.678423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.678636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.678668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.678914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.678937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.679202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.679245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.679486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.679520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.679817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.679851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.680109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.680143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.680348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.680392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.680582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.742 [2024-10-08 18:36:21.680613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.742 qpair failed and we were unable to recover it.
00:28:28.742 [2024-10-08 18:36:21.680874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.680898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.681122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.681145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.681359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.681400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.681626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.681651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.681803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.681826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.682066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.682099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.682282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.682315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.682498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.682533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.682782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.682805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.682972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.682995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.683166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.683198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.683386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.683427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.683640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.683674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.683849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.683882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.684070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.684103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.684371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.684415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.684594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.684626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.684886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.684918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.685155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.685178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.685391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.685415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.685614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.685646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.685862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.685894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.686132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.686166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.686383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.686418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.686609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.686642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.686894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.686927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.687212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.687235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.687419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.687443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.687661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.687685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.687941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.687964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.688138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.688162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.688391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.688425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.688615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.688649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.688913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.688946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.689068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.689101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.689310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.689343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.689649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.689683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.689933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.689965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.690146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.690179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.690363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.743 [2024-10-08 18:36:21.690409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.743 qpair failed and we were unable to recover it.
00:28:28.743 [2024-10-08 18:36:21.690600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.744 [2024-10-08 18:36:21.690633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.744 qpair failed and we were unable to recover it.
00:28:28.744 [2024-10-08 18:36:21.690831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.744 [2024-10-08 18:36:21.690855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.744 qpair failed and we were unable to recover it.
00:28:28.744 [2024-10-08 18:36:21.691027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.744 [2024-10-08 18:36:21.691060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.744 qpair failed and we were unable to recover it.
00:28:28.744 [2024-10-08 18:36:21.691319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.744 [2024-10-08 18:36:21.691352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.744 qpair failed and we were unable to recover it.
00:28:28.744 [2024-10-08 18:36:21.691534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.744 [2024-10-08 18:36:21.691568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.744 qpair failed and we were unable to recover it.
00:28:28.744 [2024-10-08 18:36:21.691744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.744 [2024-10-08 18:36:21.691776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.744 qpair failed and we were unable to recover it.
00:28:28.744 [2024-10-08 18:36:21.692078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.744 [2024-10-08 18:36:21.692101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.744 qpair failed and we were unable to recover it.
00:28:28.744 [2024-10-08 18:36:21.692332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.744 [2024-10-08 18:36:21.692354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.744 qpair failed and we were unable to recover it.
00:28:28.744 [2024-10-08 18:36:21.692536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.744 [2024-10-08 18:36:21.692559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.744 qpair failed and we were unable to recover it.
00:28:28.744 [2024-10-08 18:36:21.692734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.744 [2024-10-08 18:36:21.692757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.744 qpair failed and we were unable to recover it.
00:28:28.744 [2024-10-08 18:36:21.692996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.744 [2024-10-08 18:36:21.693019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.744 qpair failed and we were unable to recover it.
00:28:28.744 [2024-10-08 18:36:21.693279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.744 [2024-10-08 18:36:21.693304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.744 qpair failed and we were unable to recover it.
00:28:28.744 [2024-10-08 18:36:21.693572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.744 [2024-10-08 18:36:21.693603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.744 qpair failed and we were unable to recover it.
00:28:28.744 [2024-10-08 18:36:21.693702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.744 [2024-10-08 18:36:21.693723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.744 qpair failed and we were unable to recover it.
00:28:28.744 [2024-10-08 18:36:21.693911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.693934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.694194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.694217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.694452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.694477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.694652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.694675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.694896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.694919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.695092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.695114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.695355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.695385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.695628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.695652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.695871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.695895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.696113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.696136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 
00:28:28.744 [2024-10-08 18:36:21.696304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.696327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.696574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.696599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.696849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.696873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.697034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.697057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.697232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.697256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.697454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.697479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.697740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.697765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.697938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.697961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.698190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.698213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.698472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.698496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 
00:28:28.744 [2024-10-08 18:36:21.698724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.698747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.698986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.699010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.699255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.699278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.699449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.699472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.699643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.699668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.699912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.699941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.700130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.744 [2024-10-08 18:36:21.700153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.744 qpair failed and we were unable to recover it. 00:28:28.744 [2024-10-08 18:36:21.700318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.700341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.700597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.700622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.700879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.700901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 
00:28:28.745 [2024-10-08 18:36:21.701090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.701114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.701387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.701411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.701598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.701621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.701865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.701887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.702055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.702079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.702240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.702263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.702437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.702461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.702643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.702665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.702900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.702923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.703167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.703189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 
00:28:28.745 [2024-10-08 18:36:21.703355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.703384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.703631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.703655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.703896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.703923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.704149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.704184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.704287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.704310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.704550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.704575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.704823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.704846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.705072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.705095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.705181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.705202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.705452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.705478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 
00:28:28.745 [2024-10-08 18:36:21.705704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.705729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.705984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.706007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.706174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.706197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.706386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.706411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.706659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.706683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.706794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.706818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.706910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.706933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.707174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.707197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.707307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.707329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.707522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.707547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 
00:28:28.745 [2024-10-08 18:36:21.707708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.707731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.707926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.707954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.708205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.708230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.708455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.708479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.708586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.708609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.708828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.708851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.709079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.709112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.745 qpair failed and we were unable to recover it. 00:28:28.745 [2024-10-08 18:36:21.709304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.745 [2024-10-08 18:36:21.709330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.709506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.709530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.709811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.709835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 
00:28:28.746 [2024-10-08 18:36:21.709989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.710013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.710190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.710213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.710409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.710433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.710669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.710693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.710883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.710906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.711130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.711154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.711394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.711419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.711599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.711623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.711790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.711814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.712057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.712079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 
00:28:28.746 [2024-10-08 18:36:21.712307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.712331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.712574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.712599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.712849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.712872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.713112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.713136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.713244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.713266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.713486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.713510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.713613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.713635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.713830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.713860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.714019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.714042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.714278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.714302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 
00:28:28.746 [2024-10-08 18:36:21.714560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.714585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.714831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.714854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.715019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.715042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.715264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.715291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.715552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.715577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.715750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.715773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.715945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.715969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.716215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.716238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.716481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.716505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.716673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.716696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 
00:28:28.746 [2024-10-08 18:36:21.716952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.716978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.717144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.717168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.717410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.717433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.717625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.717649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.746 [2024-10-08 18:36:21.717821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.746 [2024-10-08 18:36:21.717844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.746 qpair failed and we were unable to recover it. 00:28:28.747 [2024-10-08 18:36:21.718104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.747 [2024-10-08 18:36:21.718126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.747 qpair failed and we were unable to recover it. 00:28:28.747 [2024-10-08 18:36:21.718242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.747 [2024-10-08 18:36:21.718265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.747 qpair failed and we were unable to recover it. 
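For readers triaging this failure: errno 111 is ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port), so every connect() issued by posix_sock_create is refused before an NVMe/TCP connection can even start. A minimal standalone sketch (plain POSIX sockets, not SPDK code) that reproduces the same errno against a port with no listener:

/* Minimal sketch: connect() to a TCP port with no listener fails with
 * errno 111 (ECONNREFUSED), the condition posix_sock_create logs above.
 * Address and port mirror the log; adjust for your own setup. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on the port this prints: errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}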
00:28:28.747 Read completed with error (sct=0, sc=8) 00:28:28.747 starting I/O failed
00:28:28.747 (the pair of messages above repeats for all 32 outstanding commands on the qpair: 26 reads and 6 writes, each completed with error (sct=0, sc=8))
00:28:28.747 [2024-10-08 18:36:21.718926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:28.747 [2024-10-08 18:36:21.719216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.747 [2024-10-08 18:36:21.719273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.747 qpair failed and we were unable to recover it.
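On the (sct=0, sc=8) completions: sct is the NVMe Status Code Type and sc the Status Code. SCT 0 is the Generic Command Status set, in which SC 0x08 is "Command Aborted due to SQ Deletion" per the NVMe base specification -- consistent with queued I/O being failed while the qpair is torn down after the CQ transport error (-6 is -ENXIO, "No such device or address"). A small decode sketch, assuming the CQE DW3 layout from the spec (phase tag at bit 16, SC at bits 24:17, SCT at bits 27:25):

/* Sketch: decode the NVMe completion status printed as (sct=0, sc=8). */
#include <stdint.h>
#include <stdio.h>

static const char *generic_sc_name(uint8_t sc)
{
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x07: return "Command Abort Requested";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "other (see NVMe base spec, Generic Command Status)";
    }
}

int main(void)
{
    uint32_t cqe_dw3 = 0x08u << 17;           /* SCT=0, SC=0x08, as in the log */
    uint8_t  sc  = (cqe_dw3 >> 17) & 0xff;    /* Status Code */
    uint8_t  sct = (cqe_dw3 >> 25) & 0x07;    /* Status Code Type */

    /* Prints: sct=0, sc=8 -> Command Aborted due to SQ Deletion */
    printf("sct=%u, sc=%u -> %s\n", sct, sc,
           sct == 0 ? generic_sc_name(sc) : "non-generic status type");
    return 0;
}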
00:28:28.747 [2024-10-08 18:36:21.719426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.747 [2024-10-08 18:36:21.719464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.747 qpair failed and we were unable to recover it.
00:28:28.749 (the three messages above repeat for every reconnect attempt from 18:36:21.719 through 18:36:21.738, alternating between tqpair=0x7f1864000b90 and tqpair=0xa01c60; every connect to addr=10.0.0.2, port=4420 fails with errno = 111 and no qpair is recovered)
00:28:28.749 [2024-10-08 18:36:21.738870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.738897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.739135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.739161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.739436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.739463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.739701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.739726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.739928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.739953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.740075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.740104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.740352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.740384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.740585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.740612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.740733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.740757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.740984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.741008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 
00:28:28.749 [2024-10-08 18:36:21.741206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.741229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.741496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.741521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.741752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.741776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.741968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.741992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.742215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.742239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.742482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.742508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.742689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.742721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.742890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.742915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.743081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.743107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.743337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.743362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 
00:28:28.749 [2024-10-08 18:36:21.743570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.743599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.743809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.743839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.744081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.744113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.744292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.744318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.744547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.744573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.744777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.749 [2024-10-08 18:36:21.744804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.749 qpair failed and we were unable to recover it. 00:28:28.749 [2024-10-08 18:36:21.745075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.745101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.745357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.745392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.745634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.745659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.745888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.745914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 
00:28:28.750 [2024-10-08 18:36:21.746167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.746196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.746451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.746476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.746653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.746679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.746855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.746880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.747187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.747218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.747392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.747422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.747677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.747702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.747974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.748000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.748262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.748292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.748488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.748520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 
00:28:28.750 [2024-10-08 18:36:21.748772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.748799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.748967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.748992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.749165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.749190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.749372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.749409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.749642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.749668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.749848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.749874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.750079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.750107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.750341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.750371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.750646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.750672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.750898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.750924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 
00:28:28.750 [2024-10-08 18:36:21.751181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.751206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.751460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.751487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.751714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.751741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.751986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.752017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.752195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.752227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.752400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.752426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.752656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.752682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.752865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.752890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.753117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.753144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.753383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.753409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 
00:28:28.750 [2024-10-08 18:36:21.753600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.753625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.753795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.753820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.750 [2024-10-08 18:36:21.754007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.750 [2024-10-08 18:36:21.754032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.750 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.754278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.754304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.754555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.754582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.754776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.754803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.755077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.755102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.755272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.755299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.755545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.755572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.755824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.755847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 
00:28:28.751 [2024-10-08 18:36:21.756018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.756041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.756264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.756289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.756484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.756509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.756675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.756698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.756855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.756879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.757109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.757183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.757495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.757535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.757812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.757846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.758094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.758128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.758373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.758435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 
00:28:28.751 [2024-10-08 18:36:21.758708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.758743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.759025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.759059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.759305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.759337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.759541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.759577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.759850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.759879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.760130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.760156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.760434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.760461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.760624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.760650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.760849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.760875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.761061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.761089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 
00:28:28.751 [2024-10-08 18:36:21.761319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.761344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.761594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.761622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.761879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.761905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.762085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.762109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.762393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.762420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.762706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.762734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.763013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.751 [2024-10-08 18:36:21.763038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.751 qpair failed and we were unable to recover it. 00:28:28.751 [2024-10-08 18:36:21.763151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.763176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.763435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.763462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.763712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.763749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 
00:28:28.752 [2024-10-08 18:36:21.764008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.764033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.764161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.764185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.764359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.764405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.764603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.764637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.764856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.764890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.765084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.765116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.765308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.765342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.765541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.765575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.765771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.765799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.766073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.766097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 
00:28:28.752 [2024-10-08 18:36:21.766270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.766294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.766457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.766482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.766650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.766674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.766895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.766919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.767044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.767068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.767179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.767201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.767427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.767451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.767700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.767724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.767969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.767994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.768169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.768195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 
00:28:28.752 [2024-10-08 18:36:21.768373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.768405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.768627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.768651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.768822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.768846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.769106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.769130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.769360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.769404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.769648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.769671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.769897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.769922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.770089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.770114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.770390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.770416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 00:28:28.752 [2024-10-08 18:36:21.770533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.752 [2024-10-08 18:36:21.770568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:28.752 qpair failed and we were unable to recover it. 
00:28:28.752 [2024-10-08 18:36:21.770749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.752 [2024-10-08 18:36:21.770774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:28.752 qpair failed and we were unable to recover it.
00:28:28.755 [... the same three-line connect()/qpair failure against tqpair=0xa01c60 repeats roughly a hundred more times between 18:36:21.770 and 18:36:21.795; only the timestamps differ ...]
00:28:28.755 [2024-10-08 18:36:21.795309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.755 [2024-10-08 18:36:21.795394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:28.755 qpair failed and we were unable to recover it.
00:28:28.755 [2024-10-08 18:36:21.795548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.755 [2024-10-08 18:36:21.795594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:28.755 qpair failed and we were unable to recover it.
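On Linux, errno = 111 is ECONNREFUSED: the TCP connection attempt to 10.0.0.2 port 4420 (the conventional NVMe/TCP port) was actively refused, typically because nothing was listening at that address when these attempts were made. The following is a minimal standalone sketch of the failing call using plain POSIX sockets, not SPDK's actual posix_sock_create() implementation:

/*
 * Minimal sketch (not SPDK code): reproduce the failure mode seen above.
 * With no listener on 10.0.0.2:4420, connect() fails with errno = 111
 * (ECONNREFUSED), the same value posix.c:1055 reports in this log.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP listen port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Prints "connect() failed, errno = 111 (Connection refused)" */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run against a host with no listener on port 4420, this prints the same errno = 111 that the log records on every attempt above.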
00:28:28.760 [... the identical failure against tqpair=0x7f1864000b90 then repeats roughly a hundred more times between 18:36:21.795 and 18:36:21.822, errno = 111 on every attempt ...]
00:28:28.761 [2024-10-08 18:36:21.822838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.822872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.823144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.823181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.823430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.823465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.823653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.823686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.823986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.824021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.824302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.824335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.824628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.824662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.824931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.824965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.825232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.825265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.825533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.825568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 
00:28:28.761 [2024-10-08 18:36:21.825809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.825843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.826022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.826055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.826176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.826209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.826481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.826516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.826697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.826730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.826976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.827009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.827299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.827331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.827616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.827651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.827921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.827960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.828245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.828278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 
00:28:28.761 [2024-10-08 18:36:21.828464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.828499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.828732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.828766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.828955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.828988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.829244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.829277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.829518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.829553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.829839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.829872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.830141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.830175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.830466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.830504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.830763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.830796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.831092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.831126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 
00:28:28.761 [2024-10-08 18:36:21.831350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.831393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.831640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.831674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.831967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.832001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.832196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.832229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.832472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.832507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.832771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.832806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.833044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.833077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.833264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.833297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.833544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.761 [2024-10-08 18:36:21.833580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.761 qpair failed and we were unable to recover it. 00:28:28.761 [2024-10-08 18:36:21.833883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.833917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 
00:28:28.762 [2024-10-08 18:36:21.834192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.834226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.834508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.834544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.834817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.834850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.835041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.835075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.835274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.835307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.835622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.835657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.835899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.835934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.836216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.836249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.836372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.836415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.836609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.836642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 
00:28:28.762 [2024-10-08 18:36:21.836909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.836943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.837066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.837100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.837364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.837419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.837666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.837700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.837910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.837944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.838211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.838244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.838538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.838574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.838857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.838890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.839087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.839127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.839350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.839393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 
00:28:28.762 [2024-10-08 18:36:21.839689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.839723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.840001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.840033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.840275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.840308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.840485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.840520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.840742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.840776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.841040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.841073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.841355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.841398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.841667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.841700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.841981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.842015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.842267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.842300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 
00:28:28.762 [2024-10-08 18:36:21.842497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.842531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.842708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.842742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.842929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.842963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.843208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.843240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.762 qpair failed and we were unable to recover it. 00:28:28.762 [2024-10-08 18:36:21.843532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.762 [2024-10-08 18:36:21.843566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.843867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.843913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.844105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.844139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.844390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.844425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.844674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.844707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.844894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.844927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 
00:28:28.763 [2024-10-08 18:36:21.845200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.845234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.845410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.845444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.845639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.845671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.845912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.845945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.846121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.846155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.846300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.846334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.846616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.846651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.846845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.846878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.847134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.847167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.847416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.847452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 
00:28:28.763 [2024-10-08 18:36:21.847631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.847664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.847878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.847911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.848186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.848220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.848502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.848537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.848728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.848760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.849005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.849038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.849284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.849317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.849533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.849568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.849746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.849785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.849962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.849995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 
00:28:28.763 [2024-10-08 18:36:21.850171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.850203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.850402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.850436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.850677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.850709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.850886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.850920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.851112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.851145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.851401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.851435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.851624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.851657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.851930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.851962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.852150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.852183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.852451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.852485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 
00:28:28.763 [2024-10-08 18:36:21.852772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.852807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.852927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.852960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.853234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.853267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.853549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.763 [2024-10-08 18:36:21.853584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.763 qpair failed and we were unable to recover it. 00:28:28.763 [2024-10-08 18:36:21.853810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.853843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 00:28:28.764 [2024-10-08 18:36:21.854086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.854119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 00:28:28.764 [2024-10-08 18:36:21.854373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.854420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 00:28:28.764 [2024-10-08 18:36:21.854695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.854728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 00:28:28.764 [2024-10-08 18:36:21.854950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.854982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 00:28:28.764 [2024-10-08 18:36:21.855178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.855212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 
00:28:28.764 [2024-10-08 18:36:21.855408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.855442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 00:28:28.764 [2024-10-08 18:36:21.855687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.855721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 00:28:28.764 [2024-10-08 18:36:21.855845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.855878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 00:28:28.764 [2024-10-08 18:36:21.856003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.856036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 00:28:28.764 [2024-10-08 18:36:21.856220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.856254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 00:28:28.764 [2024-10-08 18:36:21.856505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.856541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 00:28:28.764 [2024-10-08 18:36:21.856836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.856869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 00:28:28.764 [2024-10-08 18:36:21.857132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.857166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 00:28:28.764 [2024-10-08 18:36:21.857355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.857415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 00:28:28.764 [2024-10-08 18:36:21.857594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.764 [2024-10-08 18:36:21.857628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.764 qpair failed and we were unable to recover it. 
00:28:28.764 [2024-10-08 18:36:21.857871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:28.764 [2024-10-08 18:36:21.857905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 
00:28:28.764 qpair failed and we were unable to recover it. 
00:28:28.769 [... the same connect()/qpair-failure triplet repeats continuously from 18:36:21.857871 through 18:36:21.913841 (log timestamps 00:28:28.764-00:28:28.769), differing only in the microsecond timestamps; every attempt against addr=10.0.0.2, port=4420 on tqpair=0x7f1864000b90 fails with errno = 111 and the qpair cannot be recovered ...]
00:28:28.769 [2024-10-08 18:36:21.914108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.769 [2024-10-08 18:36:21.914141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.769 qpair failed and we were unable to recover it. 00:28:28.769 [2024-10-08 18:36:21.914441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.769 [2024-10-08 18:36:21.914476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.769 qpair failed and we were unable to recover it. 00:28:28.769 [2024-10-08 18:36:21.914730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.769 [2024-10-08 18:36:21.914762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.769 qpair failed and we were unable to recover it. 00:28:28.769 [2024-10-08 18:36:21.915020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.769 [2024-10-08 18:36:21.915056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.769 qpair failed and we were unable to recover it. 00:28:28.769 [2024-10-08 18:36:21.915357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.769 [2024-10-08 18:36:21.915403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.769 qpair failed and we were unable to recover it. 00:28:28.769 [2024-10-08 18:36:21.915685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.769 [2024-10-08 18:36:21.915719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.769 qpair failed and we were unable to recover it. 00:28:28.769 [2024-10-08 18:36:21.915902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.769 [2024-10-08 18:36:21.915935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.769 qpair failed and we were unable to recover it. 00:28:28.769 [2024-10-08 18:36:21.916152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.769 [2024-10-08 18:36:21.916185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.769 qpair failed and we were unable to recover it. 00:28:28.769 [2024-10-08 18:36:21.916465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.769 [2024-10-08 18:36:21.916499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.769 qpair failed and we were unable to recover it. 00:28:28.769 [2024-10-08 18:36:21.916783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.769 [2024-10-08 18:36:21.916816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.769 qpair failed and we were unable to recover it. 
00:28:28.770 [2024-10-08 18:36:21.917019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.917051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.917339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.917373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.917671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.917705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.917974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.918007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.918217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.918249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.918558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.918592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.918869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.918902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.919110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.919142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.919443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.919477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.919684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.919717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 
00:28:28.770 [2024-10-08 18:36:21.919994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.920027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.920226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.920258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.920538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.920571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.920877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.920910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.921173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.921205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.921504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.921539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.921762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.921794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.922011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.922046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.922267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.922299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.922576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.922610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 
00:28:28.770 [2024-10-08 18:36:21.922805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.922838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.923120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.923155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.923463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.923497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.923748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.923782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.924038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.924071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.924392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.924428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.924703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.924736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.925007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.925040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.925293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.925332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.925574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.925609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 
00:28:28.770 [2024-10-08 18:36:21.925797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.925829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.926044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.926077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.926205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.926237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.926471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.926506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.926785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.926818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.927099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.927132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.927417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.927452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.927753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.927786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.927997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.928030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.770 qpair failed and we were unable to recover it. 00:28:28.770 [2024-10-08 18:36:21.928221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.770 [2024-10-08 18:36:21.928254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 
00:28:28.771 [2024-10-08 18:36:21.928503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.928537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.928839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.928873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.929099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.929132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.929336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.929369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.929588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.929622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.929851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.929884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.930138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.930170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.930431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.930465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.930678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.930711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.930949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.930982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 
00:28:28.771 [2024-10-08 18:36:21.931175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.931207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.931466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.931500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.931805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.931838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.932099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.932132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.932355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.932399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.932725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.932759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.933041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.933073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.933312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.933345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.933656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.933690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.933962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.933995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 
00:28:28.771 [2024-10-08 18:36:21.934288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.934320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.934622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.934660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.934940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.934973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.935119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.935151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.935452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.935487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.935772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.935805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.936059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.936091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.936409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.936443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.936723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.936764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.936968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.937002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 
00:28:28.771 [2024-10-08 18:36:21.937209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.937242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.937521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.937556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.937807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.937841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.938088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.938122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.938266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.771 [2024-10-08 18:36:21.938299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.771 qpair failed and we were unable to recover it. 00:28:28.771 [2024-10-08 18:36:21.938518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.938553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.938824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.938857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.939120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.939153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.939410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.939445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.939642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.939675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 
00:28:28.772 [2024-10-08 18:36:21.939856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.939889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.940170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.940204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.940421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.940455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.940654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.940687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.940887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.940920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.941188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.941221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.941372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.941431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.941715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.941748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.941947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.941980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.942238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.942271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 
00:28:28.772 [2024-10-08 18:36:21.942473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.942507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.942802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.942834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.943038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.943070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.943294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.943327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.943524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.943558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.943721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.943754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.943973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.944006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.944226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.944260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.944542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.944577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.944880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.944913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 
00:28:28.772 [2024-10-08 18:36:21.945116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.945149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.945285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.945318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.945600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.945634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.945755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.945788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.946068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.946101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.946310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.946343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.946635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.946669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.946852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.946884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.947078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.947116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.947397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.947432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 
00:28:28.772 [2024-10-08 18:36:21.947738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.947772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.948032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.948065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.948318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.948351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.948591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.948626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.772 [2024-10-08 18:36:21.948811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.772 [2024-10-08 18:36:21.948844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.772 qpair failed and we were unable to recover it. 00:28:28.773 [2024-10-08 18:36:21.949122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.773 [2024-10-08 18:36:21.949155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.773 qpair failed and we were unable to recover it. 00:28:28.773 [2024-10-08 18:36:21.949448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.773 [2024-10-08 18:36:21.949482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.773 qpair failed and we were unable to recover it. 00:28:28.773 [2024-10-08 18:36:21.949681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.773 [2024-10-08 18:36:21.949715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.773 qpair failed and we were unable to recover it. 00:28:28.773 [2024-10-08 18:36:21.949919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.773 [2024-10-08 18:36:21.949951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.773 qpair failed and we were unable to recover it. 00:28:28.773 [2024-10-08 18:36:21.950081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.773 [2024-10-08 18:36:21.950113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:28.773 qpair failed and we were unable to recover it. 
00:28:28.773 [2024-10-08 18:36:21.950347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.773 [2024-10-08 18:36:21.950389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:28.773 qpair failed and we were unable to recover it.
00:28:29.086 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats back-to-back with timestamps from 18:36:21.950347 through 18:36:22.008434; duplicate entries condensed ...]
00:28:29.087 [2024-10-08 18:36:22.008629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.008662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.008922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.008955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.009162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.009194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.009459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.009492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.009696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.009729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.009943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.009982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.010257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.010290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.010573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.010607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.010817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.010851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.011131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.011164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 
00:28:29.087 [2024-10-08 18:36:22.011452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.011487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.011764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.011797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.012105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.012138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.012369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.012418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.012574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.012607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.012798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.012830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.013036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.013070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.013208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.013240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.013498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.013533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.013805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.013838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 
00:28:29.087 [2024-10-08 18:36:22.014101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.014135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.014248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.014280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.014486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.014521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.014728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.014760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.015028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.015062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.015333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.015366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.015663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.015697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.015903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.015936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.016160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.016194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 00:28:29.087 [2024-10-08 18:36:22.016404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.087 [2024-10-08 18:36:22.016438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.087 qpair failed and we were unable to recover it. 
00:28:29.087 [2024-10-08 18:36:22.016620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.016653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.016857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.016890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.017178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.017212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.017424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.017459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.017643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.017675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.017864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.017897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.018083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.018116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.018314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.018347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.018637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.018672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.018858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.018891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 
00:28:29.088 [2024-10-08 18:36:22.019148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.019182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.019484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.019518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.019787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.019820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.020022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.020055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.020310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.020343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.020564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.020604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.020870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.020902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.021153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.021186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.021488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.021523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.021791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.021823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 
00:28:29.088 [2024-10-08 18:36:22.022079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.022112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.022400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.022434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.022757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.022790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.023075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.023111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.023427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.023462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.023764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.023797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.024011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.024044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.024224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.024256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.024484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.024518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.024785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.024819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 
00:28:29.088 [2024-10-08 18:36:22.024952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.024985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.025204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.025237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.025399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.025433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.025716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.025749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.025964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.025997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.026218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.026251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.026536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.026571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.026769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.026802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.088 [2024-10-08 18:36:22.026992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.088 [2024-10-08 18:36:22.027025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.088 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.027219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.027251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 
00:28:29.089 [2024-10-08 18:36:22.027454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.027488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.027621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.027654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.027865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.027899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.028128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.028161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.028414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.028448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.028724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.028757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.028963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.028997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.029253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.029286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.029592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.029626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.029893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.029925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 
00:28:29.089 [2024-10-08 18:36:22.030182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.030216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.030523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.030558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.030755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.030787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.031027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.031060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.031342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.031392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.031647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.031686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.031949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.031982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.032109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.032140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.032330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.032364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.032584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.032618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 
00:28:29.089 [2024-10-08 18:36:22.032913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.032945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.033216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.033248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.033509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.033544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.033736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.033768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.033962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.033995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.034178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.034211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.034492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.034527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.034792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.034825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.035031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.035065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.035211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.035244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 
00:28:29.089 [2024-10-08 18:36:22.035455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.035490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.035702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.035735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.035924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.035957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.036153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.036186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.036469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.036503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.036707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.036740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.036993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.037026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.037327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.037360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.037637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.089 [2024-10-08 18:36:22.037671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.089 qpair failed and we were unable to recover it. 00:28:29.089 [2024-10-08 18:36:22.037926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.037958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 
00:28:29.090 [2024-10-08 18:36:22.038239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.038272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.038562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.038600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.038799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.038833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.039041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.039074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.039290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.039323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.039522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.039557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.039862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.039895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.040125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.040158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.040439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.040474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.040725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.040758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 
00:28:29.090 [2024-10-08 18:36:22.040890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.040923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.041127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.041161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.041413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.041448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.041645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.041678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.041881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.041915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.042203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.042241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.042477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.042511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.042740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.042773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.043002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.043035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.043232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.043265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 
00:28:29.090 [2024-10-08 18:36:22.043446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.043480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.043684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.043716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.043913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.043947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.044228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.044262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.044471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.044505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.044617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.044650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.044946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.044980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.045208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.045242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.045526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.045561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 00:28:29.090 [2024-10-08 18:36:22.045841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.090 [2024-10-08 18:36:22.045874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.090 qpair failed and we were unable to recover it. 
00:28:29.095 [2024-10-08 18:36:22.099820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.095 [2024-10-08 18:36:22.099852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.095 qpair failed and we were unable to recover it. 00:28:29.095 [2024-10-08 18:36:22.100128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.095 [2024-10-08 18:36:22.100161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.095 qpair failed and we were unable to recover it. 00:28:29.095 [2024-10-08 18:36:22.100439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.095 [2024-10-08 18:36:22.100475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.095 qpair failed and we were unable to recover it. 00:28:29.095 [2024-10-08 18:36:22.100764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.095 [2024-10-08 18:36:22.100796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.095 qpair failed and we were unable to recover it. 00:28:29.095 [2024-10-08 18:36:22.101049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.095 [2024-10-08 18:36:22.101083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.095 qpair failed and we were unable to recover it. 00:28:29.095 [2024-10-08 18:36:22.101214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.095 [2024-10-08 18:36:22.101247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.095 qpair failed and we were unable to recover it. 00:28:29.095 [2024-10-08 18:36:22.101461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.095 [2024-10-08 18:36:22.101496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.095 qpair failed and we were unable to recover it. 00:28:29.095 [2024-10-08 18:36:22.101703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.095 [2024-10-08 18:36:22.101736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.095 qpair failed and we were unable to recover it. 00:28:29.095 [2024-10-08 18:36:22.101957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.095 [2024-10-08 18:36:22.101990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.095 qpair failed and we were unable to recover it. 00:28:29.095 [2024-10-08 18:36:22.102281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.102314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 
00:28:29.096 [2024-10-08 18:36:22.102542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.102577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.102907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.102938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.103217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.103250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.103482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.103516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.103709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.103743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.103943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.103976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.104131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.104164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.104400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.104435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.104698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.104731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.104925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.104958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 
00:28:29.096 [2024-10-08 18:36:22.105160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.105193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.105450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.105485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.105707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.105747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.105940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.105973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.106168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.106201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.106405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.106439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.106634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.106668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.106927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.106959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.107216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.107249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.107524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.107558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 
00:28:29.096 [2024-10-08 18:36:22.107766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.107800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.107994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.108027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.108220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.108254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.108508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.108541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.108853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.108886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.109139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.109172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.109454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.109489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.109770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.109804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.109956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.109989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.110175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.110207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 
00:28:29.096 [2024-10-08 18:36:22.110467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.110501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.110659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.110692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.110949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.110982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.111285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.111318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.111593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.111628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.111823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.111855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.112038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.112072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.096 [2024-10-08 18:36:22.112345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.096 [2024-10-08 18:36:22.112389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.096 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.112594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.112627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.112825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.112858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 
00:28:29.097 [2024-10-08 18:36:22.113140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.113174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.113329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.113361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.113629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.113662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.113916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.113949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.114210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.114243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.114458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.114492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.114752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.114786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.114981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.115013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.115205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.115239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.115448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.115483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 
00:28:29.097 [2024-10-08 18:36:22.115691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.115723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.115951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.115983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.116184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.116224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.116431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.116465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.116741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.116774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.116976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.117010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.117196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.117230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.117504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.117538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.117820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.117853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.118166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.118198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 
00:28:29.097 [2024-10-08 18:36:22.118503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.118538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.118721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.118755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.119013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.119046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.119329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.119362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.119652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.119686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.119900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.119932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.120123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.120156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.120302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.120335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.120610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.120645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.120906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.120938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 
00:28:29.097 [2024-10-08 18:36:22.121237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.121271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.121548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.121582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.121839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.121872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.121987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.122020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.122220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.122252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.122532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.122567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.122698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.097 [2024-10-08 18:36:22.122731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.097 qpair failed and we were unable to recover it. 00:28:29.097 [2024-10-08 18:36:22.123010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.123043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.123313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.123348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.123654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.123688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 
00:28:29.098 [2024-10-08 18:36:22.123977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.124011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.124282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.124315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.124628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.124662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.124936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.124970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.125252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.125286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.125570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.125604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.125832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.125865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.126064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.126097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.126406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.126441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.126661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.126694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 
00:28:29.098 [2024-10-08 18:36:22.126957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.126990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.127193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.127225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.127498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.127539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.127826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.127859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.128131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.128164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.128430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.128464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.128769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.128802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.128987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.129020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.129288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.129321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.129659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.129694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 
00:28:29.098 [2024-10-08 18:36:22.129977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.130011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.130288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.130319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.130547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.130582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.130862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.130895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.131126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.131159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.131283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.131316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.131617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.131653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.131898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.131931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.132160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.132193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.132400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.132434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 
00:28:29.098 [2024-10-08 18:36:22.132633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.132666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.132938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.132971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.133275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.133309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.133570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.133605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.133894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.133926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.134205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.098 [2024-10-08 18:36:22.134237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.098 qpair failed and we were unable to recover it. 00:28:29.098 [2024-10-08 18:36:22.134451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.134487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.134764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.134797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.135051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.135084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.135363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.135407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 
00:28:29.099 [2024-10-08 18:36:22.135707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.135741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.136022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.136055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.136311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.136344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.136649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.136683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.136942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.136976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.137280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.137312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.137521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.137556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.137760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.137793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.138074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.138108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.138307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.138339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 
00:28:29.099 [2024-10-08 18:36:22.138548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.138582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.138812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.138844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.139156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.139196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.139451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.139487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.139775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.139808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.140015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.140048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.140188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.140222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.140444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.140478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.140732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.140765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 00:28:29.099 [2024-10-08 18:36:22.141021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.099 [2024-10-08 18:36:22.141054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.099 qpair failed and we were unable to recover it. 
00:28:29.104 [2024-10-08 18:36:22.195794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.195827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.195964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.196001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.196206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.196238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.196444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.196478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.196666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.196700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.196901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.196933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.197165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.197200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.197457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.197492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.197694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.197727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.197928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.197960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 
00:28:29.105 [2024-10-08 18:36:22.198165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.198198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.198403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.198437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.198643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.198677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.198799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.198832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.199021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.199053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.199250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.199282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.199445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.199480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.199706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.199739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.199947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.199980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.200239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.200273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 
00:28:29.105 [2024-10-08 18:36:22.200557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.200593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.200818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.200851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.201107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.201140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.201272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.201305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.201577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.201612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.201809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.201841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.202037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.202070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.202323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.202355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.202569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.202603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.202815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.202848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 
00:28:29.105 [2024-10-08 18:36:22.203055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.203088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.203284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.203316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.203529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.203570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.203831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.203864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.204005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.204038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.204298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.204331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.204630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.204666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.204945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.204978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.205169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.205202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 00:28:29.105 [2024-10-08 18:36:22.205478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.105 [2024-10-08 18:36:22.205513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.105 qpair failed and we were unable to recover it. 
00:28:29.105 [2024-10-08 18:36:22.205724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.205757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.206014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.206048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.206259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.206291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.206580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.206615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.206809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.206843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.207089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.207173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.207369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.207423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.207715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.207753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.208058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.208096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.208404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.208453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 
00:28:29.106 [2024-10-08 18:36:22.208610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.208645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.208910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.208946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.209111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.209148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.209401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.209438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.209663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.209703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.209843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.209878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.210037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.210078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.210296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.210332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.210557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0fbb0 is same with the state(6) to be set 00:28:29.106 [2024-10-08 18:36:22.210783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.210860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 
00:28:29.106 [2024-10-08 18:36:22.211096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.211134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.211342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.211392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.211606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.211640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.211786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.211820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.212021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.212054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.212309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.212341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.212500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.212534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.212678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.212710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.212894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.212926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.213201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.213235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 
00:28:29.106 [2024-10-08 18:36:22.213451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.213486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.213731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.213764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.214054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.214087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.214292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.214325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.214585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.214620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.214816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.214849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.215048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.215081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.215329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.215362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.215669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.215703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.106 [2024-10-08 18:36:22.215966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.215998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 
00:28:29.106 [2024-10-08 18:36:22.216292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.106 [2024-10-08 18:36:22.216324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.106 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.216562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.216596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.216878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.216911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.217195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.217227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.217533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.217568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.217771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.217803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.218080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.218119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.218317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.218349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.218568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.218602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.218812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.218843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 
00:28:29.107 [2024-10-08 18:36:22.219047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.219080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.219332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.219365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.219668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.219701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.219915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.219948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.220222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.220254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.220525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.220558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.220841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.220875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.221077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.221110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.221373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.221425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.221709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.221742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 
00:28:29.107 [2024-10-08 18:36:22.222031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.222063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.222340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.222373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.222576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.222609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.222789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.222821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.223035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.223067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.223343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.223385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.223683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.223716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.223965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.223998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.224312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.224346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.224521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.224563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 
00:28:29.107 [2024-10-08 18:36:22.224764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.224797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.225107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.225141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.225427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.225463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.225675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.225708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.225991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.226024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.226322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.226356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.226626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.226659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.226879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.226913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.227186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.227219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 00:28:29.107 [2024-10-08 18:36:22.227416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.107 [2024-10-08 18:36:22.227450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.107 qpair failed and we were unable to recover it. 
00:28:29.107 [2024-10-08 18:36:22.227725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.108 [2024-10-08 18:36:22.227759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.108 qpair failed and we were unable to recover it. 00:28:29.108 [2024-10-08 18:36:22.227974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.108 [2024-10-08 18:36:22.228007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.108 qpair failed and we were unable to recover it. 00:28:29.108 [2024-10-08 18:36:22.228285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.108 [2024-10-08 18:36:22.228318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.108 qpair failed and we were unable to recover it. 00:28:29.108 [2024-10-08 18:36:22.228606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.108 [2024-10-08 18:36:22.228641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.108 qpair failed and we were unable to recover it. 00:28:29.108 [2024-10-08 18:36:22.228864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.108 [2024-10-08 18:36:22.228896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.108 qpair failed and we were unable to recover it. 00:28:29.108 [2024-10-08 18:36:22.229122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.108 [2024-10-08 18:36:22.229155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.108 qpair failed and we were unable to recover it. 00:28:29.108 [2024-10-08 18:36:22.229359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.108 [2024-10-08 18:36:22.229410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.108 qpair failed and we were unable to recover it. 00:28:29.108 [2024-10-08 18:36:22.229560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.108 [2024-10-08 18:36:22.229593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.108 qpair failed and we were unable to recover it. 00:28:29.108 [2024-10-08 18:36:22.229847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.108 [2024-10-08 18:36:22.229880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.108 qpair failed and we were unable to recover it. 00:28:29.108 [2024-10-08 18:36:22.230071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.108 [2024-10-08 18:36:22.230105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.108 qpair failed and we were unable to recover it. 
00:28:29.108 [2024-10-08 18:36:22.230309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.108 [2024-10-08 18:36:22.230342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:29.108 qpair failed and we were unable to recover it.
00:28:29.108 [... the same three-line sequence — posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error of tqpair=... with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." — repeats without interruption from 18:36:22.230309 through 18:36:22.289210, alternating across tqpair=0x7f1864000b90, 0x7f185c000b90, and 0x7f1858000b90, with every attempt targeting addr=10.0.0.2, port=4420 ...]
00:28:29.113 [2024-10-08 18:36:22.289178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.113 [2024-10-08 18:36:22.289210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:29.113 qpair failed and we were unable to recover it.
00:28:29.113 [2024-10-08 18:36:22.289504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.289539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.289811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.289844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.290121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.290154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.290387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.290423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.290569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.290600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.290885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.290918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.291193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.291226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.291429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.291461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.291649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.291681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.291898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.291930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 
00:28:29.114 [2024-10-08 18:36:22.292178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.292210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.292486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.292520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.292745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.292776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.293056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.293087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.293399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.293434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.293708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.293741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.294017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.294049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.294278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.294311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.294504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.294537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.294790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.294822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 
00:28:29.114 [2024-10-08 18:36:22.295077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.295109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.295359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.295402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.295526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.295559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.295818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.295851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.296109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.296141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.296443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.296477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.296773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.296811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.297102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.297134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.297406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.297438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.297708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.297740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 
00:28:29.114 [2024-10-08 18:36:22.298037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.298076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.298367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.298410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.298675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.298708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.298992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.299025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.299214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.299247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.299526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.299560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.299751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.299783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.299967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.299999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.114 [2024-10-08 18:36:22.300255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.114 [2024-10-08 18:36:22.300286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.114 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.300466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.300499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 
00:28:29.115 [2024-10-08 18:36:22.300788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.300820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.301100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.301132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.301416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.301449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.301736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.301767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.301989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.302023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.302327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.302358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.302597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.302632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.302953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.302987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.303202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.303234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.303428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.303462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 
00:28:29.115 [2024-10-08 18:36:22.303659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.303692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.303947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.303980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.304173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.304206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.304488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.304523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.304736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.304769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.304960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.304992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.305289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.305322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.305593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.305627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.305823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.305855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.306111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.306144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 
00:28:29.115 [2024-10-08 18:36:22.306445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.306478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.306752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.306785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.306973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.307006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.307282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.307314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.307517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.307551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.307765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.307798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.307997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.308029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.308232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.308265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.308524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.308566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.308745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.308776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 
00:28:29.115 [2024-10-08 18:36:22.309059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.309091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.309404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.309438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.309718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.309751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.310013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.310045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.310347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.310390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.310620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.115 [2024-10-08 18:36:22.310653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.115 qpair failed and we were unable to recover it. 00:28:29.115 [2024-10-08 18:36:22.310958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.310990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.311213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.311245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.311431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.311465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.311737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.311769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 
00:28:29.116 [2024-10-08 18:36:22.312051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.312084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.312366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.312411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.312684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.312717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.312950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.312983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.313260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.313292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.313552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.313587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.313895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.313927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.314185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.314217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.314490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.314524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.314788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.314821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 
00:28:29.116 [2024-10-08 18:36:22.315042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.315073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.315330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.315362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.315670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.315704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.315903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.315941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.316194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.316226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.316522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.316557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.316849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.316883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.317065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.317098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.317367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.317435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.317690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.317722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 
00:28:29.116 [2024-10-08 18:36:22.317869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.317902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.318102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.318135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.318400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.318433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.318640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.318672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.318860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.318893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.319110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.319142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.319407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.319440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.319652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.319686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.319960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.319992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.320198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.320230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 
00:28:29.116 [2024-10-08 18:36:22.320483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.320518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.320822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.320855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.321118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.321151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.321452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.321487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.321719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.321751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.322001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.116 [2024-10-08 18:36:22.322033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.116 qpair failed and we were unable to recover it. 00:28:29.116 [2024-10-08 18:36:22.322256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.117 [2024-10-08 18:36:22.322289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.117 qpair failed and we were unable to recover it. 00:28:29.117 [2024-10-08 18:36:22.322472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.117 [2024-10-08 18:36:22.322506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.117 qpair failed and we were unable to recover it. 00:28:29.117 [2024-10-08 18:36:22.322780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.117 [2024-10-08 18:36:22.322812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.117 qpair failed and we were unable to recover it. 00:28:29.117 [2024-10-08 18:36:22.323090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.117 [2024-10-08 18:36:22.323123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.117 qpair failed and we were unable to recover it. 
00:28:29.117 [2024-10-08 18:36:22.323407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.117 [2024-10-08 18:36:22.323441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.117 qpair failed and we were unable to recover it. 00:28:29.117 [2024-10-08 18:36:22.323727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.117 [2024-10-08 18:36:22.323759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.117 qpair failed and we were unable to recover it. 00:28:29.117 [2024-10-08 18:36:22.324038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.117 [2024-10-08 18:36:22.324071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.117 qpair failed and we were unable to recover it. 00:28:29.117 [2024-10-08 18:36:22.324360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.117 [2024-10-08 18:36:22.324401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.117 qpair failed and we were unable to recover it. 00:28:29.117 [2024-10-08 18:36:22.324657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.117 [2024-10-08 18:36:22.324690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.117 qpair failed and we were unable to recover it. 00:28:29.117 [2024-10-08 18:36:22.324890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.117 [2024-10-08 18:36:22.324923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.117 qpair failed and we were unable to recover it. 00:28:29.117 [2024-10-08 18:36:22.325129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.117 [2024-10-08 18:36:22.325162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.117 qpair failed and we were unable to recover it. 00:28:29.117 [2024-10-08 18:36:22.325438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.117 [2024-10-08 18:36:22.325472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.117 qpair failed and we were unable to recover it. 00:28:29.117 [2024-10-08 18:36:22.325755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.117 [2024-10-08 18:36:22.325788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.117 qpair failed and we were unable to recover it. 00:28:29.117 [2024-10-08 18:36:22.325986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.117 [2024-10-08 18:36:22.326018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.117 qpair failed and we were unable to recover it. 
00:28:29.117 [2024-10-08 18:36:22.326198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.117 [2024-10-08 18:36:22.326230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:29.117 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 -> sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for roughly 200 further connection attempts between 18:36:22.326 and 18:36:22.381 ...]
00:28:29.123 [2024-10-08 18:36:22.380892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.123 [2024-10-08 18:36:22.380925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:29.123 qpair failed and we were unable to recover it.
00:28:29.123 [2024-10-08 18:36:22.381191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.123 [2024-10-08 18:36:22.381223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.123 qpair failed and we were unable to recover it. 00:28:29.123 [2024-10-08 18:36:22.381421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.123 [2024-10-08 18:36:22.381456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.123 qpair failed and we were unable to recover it. 00:28:29.123 [2024-10-08 18:36:22.381734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.123 [2024-10-08 18:36:22.381767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.123 qpair failed and we were unable to recover it. 00:28:29.123 [2024-10-08 18:36:22.381928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.123 [2024-10-08 18:36:22.381960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.123 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.382267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.382302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.382559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.382596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.382824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.382857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.383110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.383143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.383444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.383478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.383663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.383695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 
00:28:29.401 [2024-10-08 18:36:22.383959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.383991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.384192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.384224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.384503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.384537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.384764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.384796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.384944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.384976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.385247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.385279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.385420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.385454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.385706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.385739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.386039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.386072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.386329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.386362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 
00:28:29.401 [2024-10-08 18:36:22.386588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.386622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.386874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.386906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.387190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.387222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.387475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.387509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.387716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.387748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.388040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.388072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.388260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.388292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.388506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.388539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.388798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.388831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.401 qpair failed and we were unable to recover it. 00:28:29.401 [2024-10-08 18:36:22.388975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.401 [2024-10-08 18:36:22.389006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 
00:28:29.402 [2024-10-08 18:36:22.389229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.389260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.389449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.389483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.389738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.389769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.389991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.390028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.390267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.390300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.390479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.390511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.390765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.390797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.391028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.391062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.391337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.391372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.391581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.391613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 
00:28:29.402 [2024-10-08 18:36:22.391879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.391912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.392127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.392158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.392278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.392309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.392536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.392576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.392724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.392754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.392963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.392996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.393188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.393221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.393426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.393460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.393661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.393694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.393898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.393930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 
00:28:29.402 [2024-10-08 18:36:22.394140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.394171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.394402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.394435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.394632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.394665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.394941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.394972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.395169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.395200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.395513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.395548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.395766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.395799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.396002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.396033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.396238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.396270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.396550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.396583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 
00:28:29.402 [2024-10-08 18:36:22.396910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.396942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.397182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.397215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.397469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.397503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.397737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.397770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.397964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.397996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.398263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.398294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.398500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.398534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.398796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.398828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.399034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.399066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.402 qpair failed and we were unable to recover it. 00:28:29.402 [2024-10-08 18:36:22.399245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.402 [2024-10-08 18:36:22.399276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 
00:28:29.403 [2024-10-08 18:36:22.399551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.399585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.399861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.399893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.400198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.400230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.400498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.400538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.400729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.400761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.400944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.400976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.401251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.401283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.401507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.401541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.401678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.401710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.402016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.402048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 
00:28:29.403 [2024-10-08 18:36:22.402370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.402415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.402692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.402723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.403004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.403037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.403314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.403347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.403469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.403503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.403729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.403762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.404065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.404096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.404296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.404328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.404542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.404575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.404849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.404880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 
00:28:29.403 [2024-10-08 18:36:22.405079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.405111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.405369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.405434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.405576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.405609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.405860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.405892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.406196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.406229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.406434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.406467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.406745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.406776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.406933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.406965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.407112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.407148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.407336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.407366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 
00:28:29.403 [2024-10-08 18:36:22.407656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.407690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.407950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.407980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.408231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.408262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.408467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.408502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.408696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.408727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.408870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.408901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.409186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.409219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.409468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.409501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.409681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.409713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.403 qpair failed and we were unable to recover it. 00:28:29.403 [2024-10-08 18:36:22.409871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.403 [2024-10-08 18:36:22.409903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 
00:28:29.404 [2024-10-08 18:36:22.410157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.410190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.410481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.410515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.410750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.410790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.411066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.411104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.411289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.411320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.411636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.411670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.411926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.411957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.412152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.412184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.412465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.412498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.412695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.412727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 
00:28:29.404 [2024-10-08 18:36:22.412931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.412963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.413240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.413272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.413519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.413552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.413846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.413878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.414159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.414192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.414474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.414508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.414704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.414737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.414964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.414997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.415196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.415227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 00:28:29.404 [2024-10-08 18:36:22.415362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.404 [2024-10-08 18:36:22.415402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.404 qpair failed and we were unable to recover it. 
00:28:29.404 [2024-10-08 18:36:22.415632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.404 [2024-10-08 18:36:22.415664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:29.404 qpair failed and we were unable to recover it.
00:28:29.404 [... 208 further occurrences of the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.), spanning 18:36:22.415944 through 18:36:22.473877, elided as duplicates ...]
00:28:29.409 [2024-10-08 18:36:22.474164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.409 [2024-10-08 18:36:22.474200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:29.409 qpair failed and we were unable to recover it.
00:28:29.409 [2024-10-08 18:36:22.474342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.409 [2024-10-08 18:36:22.474414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.409 qpair failed and we were unable to recover it. 00:28:29.409 [2024-10-08 18:36:22.474676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.409 [2024-10-08 18:36:22.474712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.474912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.474957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.475223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.475258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.475445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.475489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.475664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.475704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.476003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.476048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.476285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.476326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.476559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.476601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.476773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.476813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 
00:28:29.410 [2024-10-08 18:36:22.477027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.477064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.477366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.477431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.477733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.477770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.477929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.477971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.478302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.478338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.478574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.478610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.478830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.478865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.479152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.479188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.479474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.479512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.479692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.479727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 
00:28:29.410 [2024-10-08 18:36:22.479924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.479960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.480238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.480273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.480502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.480540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.480716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.480759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.481013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.481048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.481249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.481287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.481514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.481555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.481705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.481740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.481911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.481947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.482236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.482273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 
00:28:29.410 [2024-10-08 18:36:22.482564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.482601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.482820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.482854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.483022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.483059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.483269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.483311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.483644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.483682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.483938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.483980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.484215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.484250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.484537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.484581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.484809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.484845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 00:28:29.410 [2024-10-08 18:36:22.485131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.485176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.410 qpair failed and we were unable to recover it. 
00:28:29.410 [2024-10-08 18:36:22.485424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.410 [2024-10-08 18:36:22.485480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.485766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.485807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.486104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.486139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.486394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.486440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.486687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.486724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.486981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.487020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.487246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.487286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.487504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.487546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.487765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.487800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.488057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.488103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 
00:28:29.411 [2024-10-08 18:36:22.488405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.488445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.488661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.488697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.489019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.489056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.489212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.489250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.489530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.489578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.489824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.489860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.490113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.490154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.490396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.490440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.490651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.490686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.490949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.490987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 
00:28:29.411 [2024-10-08 18:36:22.491305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.491341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.491602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.491647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.491885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.491927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.492225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.492271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.492493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.492536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.492788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.492823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.493042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.493078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.493397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.493434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.493663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.493700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.493945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.493987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 
00:28:29.411 [2024-10-08 18:36:22.494283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.494322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.494515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.494562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.494762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.494804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.494968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.495012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.495306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.495342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.495617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.495692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.495947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.496022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.496332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.496370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.496541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.496574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.411 [2024-10-08 18:36:22.496781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.496814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 
00:28:29.411 [2024-10-08 18:36:22.497019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.411 [2024-10-08 18:36:22.497064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.411 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.497317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.497349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.497625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.497659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.497858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.497891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.498237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.498270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.498563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.498597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.498869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.498902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.499194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.499227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.499536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.499571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.499844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.499878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 
00:28:29.412 [2024-10-08 18:36:22.500153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.500185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.500464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.500498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.500653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.500685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.500881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.500914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.501197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.501229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.501441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.501476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.501675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.501707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.501999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.502031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.502262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.502294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.502501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.502535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 
00:28:29.412 [2024-10-08 18:36:22.502759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.502792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.503048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.503081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.503357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.503401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.503606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.503640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.503846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.503879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.504080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.504111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.504373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.504418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.504734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.504766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.505017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.505050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.505308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.505341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 
00:28:29.412 [2024-10-08 18:36:22.505612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.505645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.505851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.505883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.506150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.506182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.506481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.506516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.506713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.506746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.506932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.506964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.507158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.507190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.507456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.507490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.507646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.507679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 00:28:29.412 [2024-10-08 18:36:22.507863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.507897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.412 qpair failed and we were unable to recover it. 
00:28:29.412 [2024-10-08 18:36:22.508086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.412 [2024-10-08 18:36:22.508123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.508407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.508442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.508737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.508770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.508975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.509007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.509275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.509307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.509446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.509480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.509663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.509696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.509887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.509920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.510213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.510246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.510482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.510516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 
00:28:29.413 [2024-10-08 18:36:22.510780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.510813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.511037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.511071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.511394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.511426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.511720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.511753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.511982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.512015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.512243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.512275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.512465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.512498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.512795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.512826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.513015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.513047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 00:28:29.413 [2024-10-08 18:36:22.513292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.413 [2024-10-08 18:36:22.513324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.413 qpair failed and we were unable to recover it. 
00:28:29.418 [2024-10-08 18:36:22.567215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.418 [2024-10-08 18:36:22.567248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.418 qpair failed and we were unable to recover it. 00:28:29.418 [2024-10-08 18:36:22.567430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.418 [2024-10-08 18:36:22.567464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.418 qpair failed and we were unable to recover it. 00:28:29.418 [2024-10-08 18:36:22.567734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.418 [2024-10-08 18:36:22.567765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.418 qpair failed and we were unable to recover it. 00:28:29.418 [2024-10-08 18:36:22.567910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.418 [2024-10-08 18:36:22.567943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.418 qpair failed and we were unable to recover it. 00:28:29.418 [2024-10-08 18:36:22.568146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.418 [2024-10-08 18:36:22.568178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.418 qpair failed and we were unable to recover it. 00:28:29.418 [2024-10-08 18:36:22.568450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.418 [2024-10-08 18:36:22.568484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.418 qpair failed and we were unable to recover it. 00:28:29.418 [2024-10-08 18:36:22.568687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.418 [2024-10-08 18:36:22.568719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.418 qpair failed and we were unable to recover it. 00:28:29.418 [2024-10-08 18:36:22.568940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.568973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.569231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.569263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.569500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.569534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 
00:28:29.419 [2024-10-08 18:36:22.569745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.569778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.570096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.570129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.570412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.570446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.570663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.570695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.570894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.570927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.571128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.571160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.571354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.571413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.571559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.571592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.571788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.571821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.572048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.572087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 
00:28:29.419 [2024-10-08 18:36:22.572284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.572323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.572468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.572502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.572635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.572667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.572868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.572901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.573090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.573123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.573385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.573419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.573627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.573660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.573951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.573982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.574196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.574228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.574512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.574547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 
00:28:29.419 [2024-10-08 18:36:22.574686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.574718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.574862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.574895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.575184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.575217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.575514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.575548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.575781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.575814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.576112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.576145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.576339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.576370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.576590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.576623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.576769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.576801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.576956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.576988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 
00:28:29.419 [2024-10-08 18:36:22.577260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.577293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.577515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.577549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.577708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.577740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.577938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.419 [2024-10-08 18:36:22.577971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.419 qpair failed and we were unable to recover it. 00:28:29.419 [2024-10-08 18:36:22.578272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.578304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.578534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.578569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.578795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.578828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.579028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.579060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.579363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.579408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.579611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.579644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 
00:28:29.420 [2024-10-08 18:36:22.579854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.579886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.580152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.580185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.580485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.580519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.580667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.580700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.580909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.580942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.581220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.581253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.581492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.581527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.581684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.581717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.581933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.581965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.582254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.582286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 
00:28:29.420 [2024-10-08 18:36:22.582505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.582553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.582834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.582867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.583130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.583163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.583394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.583427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.583630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.583663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.583799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.583831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.584104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.584136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.584422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.584456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.584679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.584712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.584922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.584954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 
00:28:29.420 [2024-10-08 18:36:22.585145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.585177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.585416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.585451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.585702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.585735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.585933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.585965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.586104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.586136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.586409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.586443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.586736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.586769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.586993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.587026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.587221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.587254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.587531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.587565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 
00:28:29.420 [2024-10-08 18:36:22.587763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.587796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.587943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.587976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.420 qpair failed and we were unable to recover it. 00:28:29.420 [2024-10-08 18:36:22.588205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.420 [2024-10-08 18:36:22.588238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.588484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.588517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.588783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.588823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.589040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.589073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.589358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.589404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.589538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.589577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.589714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.589746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.589948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.589979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 
00:28:29.421 [2024-10-08 18:36:22.590202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.590235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.590455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.590490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.590743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.590775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.591093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.591127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.591400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.591434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.591660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.591692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.591888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.591920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.592237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.592270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.592526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.592559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.592755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.592787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 
00:28:29.421 [2024-10-08 18:36:22.592989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.593021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.593307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.593340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.593646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.593680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.593816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.593847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.594056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.594090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.594365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.594414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.594636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.594669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.594805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.594837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.595053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.595085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.595388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.595423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 
00:28:29.421 [2024-10-08 18:36:22.595677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.595709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.595962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.595995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.596269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.596302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.596520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.596553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.596792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.596825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.596974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.597006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.597258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.597291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.597488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.597523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.597717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.597750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.598026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.598059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 
00:28:29.421 [2024-10-08 18:36:22.598254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.421 [2024-10-08 18:36:22.598286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.421 qpair failed and we were unable to recover it. 00:28:29.421 [2024-10-08 18:36:22.598544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.422 [2024-10-08 18:36:22.598579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.422 qpair failed and we were unable to recover it. 00:28:29.422 [2024-10-08 18:36:22.598788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.422 [2024-10-08 18:36:22.598820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.422 qpair failed and we were unable to recover it. 00:28:29.422 [2024-10-08 18:36:22.599086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.422 [2024-10-08 18:36:22.599119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.422 qpair failed and we were unable to recover it. 00:28:29.422 [2024-10-08 18:36:22.599327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.422 [2024-10-08 18:36:22.599360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.422 qpair failed and we were unable to recover it. 00:28:29.422 [2024-10-08 18:36:22.599510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.422 [2024-10-08 18:36:22.599543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.422 qpair failed and we were unable to recover it. 00:28:29.422 [2024-10-08 18:36:22.599746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.422 [2024-10-08 18:36:22.599779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.422 qpair failed and we were unable to recover it. 00:28:29.422 [2024-10-08 18:36:22.599984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.422 [2024-10-08 18:36:22.600024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.422 qpair failed and we were unable to recover it. 00:28:29.422 [2024-10-08 18:36:22.600232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.422 [2024-10-08 18:36:22.600264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.422 qpair failed and we were unable to recover it. 00:28:29.422 [2024-10-08 18:36:22.600453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.422 [2024-10-08 18:36:22.600487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.422 qpair failed and we were unable to recover it. 
00:28:29.422 [2024-10-08 18:36:22.600765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.422 [2024-10-08 18:36:22.600797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:29.422 qpair failed and we were unable to recover it.
00:28:29.427 [... the same three-entry failure sequence (connect() refused with errno = 111/ECONNREFUSED; sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420; qpair not recovered) repeats for every retry from 18:36:22.600 through 18:36:22.656 ...]
00:28:29.427 [2024-10-08 18:36:22.656611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.427 [2024-10-08 18:36:22.656645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.427 qpair failed and we were unable to recover it. 00:28:29.427 [2024-10-08 18:36:22.656939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.427 [2024-10-08 18:36:22.656973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.427 qpair failed and we were unable to recover it. 00:28:29.427 [2024-10-08 18:36:22.657185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.427 [2024-10-08 18:36:22.657218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.427 qpair failed and we were unable to recover it. 00:28:29.427 [2024-10-08 18:36:22.657514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.657549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.657815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.657849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.658072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.658106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.658360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.658411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.658612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.658645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.658845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.658877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.659116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.659148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 
00:28:29.428 [2024-10-08 18:36:22.659393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.659428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.659682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.659714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.659916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.659949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.660106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.660140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.660274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.660307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.660573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.660608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.660834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.660868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.661222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.661255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.661405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.661440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.661565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.661598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 
00:28:29.428 [2024-10-08 18:36:22.661877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.661910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.662194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.662226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.662541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.662579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.662724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.662757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.662961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.662995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.663211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.663245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.663468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.663504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.663636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.663675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.663884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.663918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.664227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.664261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 
00:28:29.428 [2024-10-08 18:36:22.664446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.664481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.664631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.664665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.664873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.664906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.665123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.665154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.665468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.665505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.665632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.665665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.665916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.665950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.666105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.666139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.666328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.666363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.666609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.666645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 
00:28:29.428 [2024-10-08 18:36:22.666887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.666920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.667130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.667165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.428 [2024-10-08 18:36:22.667355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.428 [2024-10-08 18:36:22.667407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.428 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.667625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.667659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.667927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.667960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.668218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.668251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.668412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.668447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.668645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.668677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.668873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.668905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.669240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.669274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 
00:28:29.429 [2024-10-08 18:36:22.669502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.669537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.669684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.669717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.669939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.669973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.670135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.670168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.670368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.670418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.670631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.670663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.670919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.670952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.671221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.671253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.671463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.671499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.671654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.671687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 
00:28:29.429 [2024-10-08 18:36:22.671915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.671946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.672243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.672277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.672482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.672518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.672722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.672754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.672976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.673010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.673154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.673187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.673394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.673428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.673578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.673616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.673770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.673802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.674012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.674045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 
00:28:29.429 [2024-10-08 18:36:22.674278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.674311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.674515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.674550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.674746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.674779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.675066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.675098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.675255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.675289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.675530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.675565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.675790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.675823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.676020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.676053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.676260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.676292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.676481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.676516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 
00:28:29.429 [2024-10-08 18:36:22.676685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.676718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.676931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.676964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.429 [2024-10-08 18:36:22.677188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.429 [2024-10-08 18:36:22.677223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.429 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.677478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.677515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.677726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.677758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.677960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.677994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.678260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.678293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.678447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.678483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.678745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.678777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.678929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.678961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 
00:28:29.430 [2024-10-08 18:36:22.679164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.679198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.679405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.679440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.679592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.679625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.681609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.681672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.681862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.681896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.682167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.682201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.682407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.682443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.682613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.682646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.682876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.682908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.683139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.683172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 
00:28:29.430 [2024-10-08 18:36:22.683403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.683439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.683647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.683681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.683944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.683976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.684175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.684209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.684412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.684448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.684731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.684766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.685043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.685076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.685303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.685344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.685495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.685529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.685681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.685714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 
00:28:29.430 [2024-10-08 18:36:22.685916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.685953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.686155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.686189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.686392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.686430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.686574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.686608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.686903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.686935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.687209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.687243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.687449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.687486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.687645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.687680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.687941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.687974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.688192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.688225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 
00:28:29.430 [2024-10-08 18:36:22.688487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.688522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.688685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.688718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.430 [2024-10-08 18:36:22.688868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.430 [2024-10-08 18:36:22.688900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.430 qpair failed and we were unable to recover it. 00:28:29.431 [2024-10-08 18:36:22.689050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.431 [2024-10-08 18:36:22.689083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.431 qpair failed and we were unable to recover it. 00:28:29.431 [2024-10-08 18:36:22.689362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.431 [2024-10-08 18:36:22.689410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.431 qpair failed and we were unable to recover it. 00:28:29.431 [2024-10-08 18:36:22.689671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.431 [2024-10-08 18:36:22.689703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.431 qpair failed and we were unable to recover it. 00:28:29.431 [2024-10-08 18:36:22.690091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.431 [2024-10-08 18:36:22.690124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.431 qpair failed and we were unable to recover it. 00:28:29.431 [2024-10-08 18:36:22.690435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.431 [2024-10-08 18:36:22.690472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.431 qpair failed and we were unable to recover it. 00:28:29.431 [2024-10-08 18:36:22.690640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.431 [2024-10-08 18:36:22.690673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.431 qpair failed and we were unable to recover it. 00:28:29.431 [2024-10-08 18:36:22.690869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.431 [2024-10-08 18:36:22.690903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.431 qpair failed and we were unable to recover it. 
00:28:29.431 [2024-10-08 18:36:22.691208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.431 [2024-10-08 18:36:22.691241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:29.431 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() errno = 111 -> sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats back-to-back for ~80 attempts, 18:36:22.691 through 18:36:22.710 ...]
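For reference, errno = 111 in the posix_sock_create messages above is ECONNREFUSED on Linux: each TCP connection attempt to 10.0.0.2:4420 is being actively refused because nothing is accepting on that address/port at the moment of the attempt, so the NVMe/TCP initiator tears the qpair down each time. A quick standalone check (illustrative plain C, not SPDK code):

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* On Linux, errno 111 is ECONNREFUSED ("Connection refused"). */
    printf("errno 111 == ECONNREFUSED? %s\n", ECONNREFUSED == 111 ? "yes" : "no");
    printf("strerror(111) = %s\n", strerror(111));
    return 0;
}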
00:28:29.714 [2024-10-08 18:36:22.711235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.714 [2024-10-08 18:36:22.711268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:29.714 qpair failed and we were unable to recover it.
[... a few more attempts against tqpair=0x7f1864000b90, then at 18:36:22.712 the reported handle changes ...]
00:28:29.714 [2024-10-08 18:36:22.712297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.714 [2024-10-08 18:36:22.712354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:29.714 qpair failed and we were unable to recover it.
[... the identical three-line sequence then repeats for tqpair=0xa01c60 for ~125 more attempts, 18:36:22.712 through 18:36:22.738 ...]
00:28:29.717 [2024-10-08 18:36:22.738416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.717 [2024-10-08 18:36:22.738443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.717 qpair failed and we were unable to recover it. 00:28:29.717 [2024-10-08 18:36:22.738662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.717 [2024-10-08 18:36:22.738689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.717 qpair failed and we were unable to recover it. 00:28:29.717 [2024-10-08 18:36:22.738888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.717 [2024-10-08 18:36:22.738915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.717 qpair failed and we were unable to recover it. 00:28:29.717 [2024-10-08 18:36:22.739119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.717 [2024-10-08 18:36:22.739145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.717 qpair failed and we were unable to recover it. 00:28:29.717 [2024-10-08 18:36:22.739265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.717 [2024-10-08 18:36:22.739291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.717 qpair failed and we were unable to recover it. 00:28:29.717 [2024-10-08 18:36:22.739486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.717 [2024-10-08 18:36:22.739514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.717 qpair failed and we were unable to recover it. 00:28:29.717 [2024-10-08 18:36:22.739696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.717 [2024-10-08 18:36:22.739723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.717 qpair failed and we were unable to recover it. 00:28:29.717 [2024-10-08 18:36:22.739852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.717 [2024-10-08 18:36:22.739878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.740066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.740092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.740283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.740309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 
00:28:29.718 [2024-10-08 18:36:22.740432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.740470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.740592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.740618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.740737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.740763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.741017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.741044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.741178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.741204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.741317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.741348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.741551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.741578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.741697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.741723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.741909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.741934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.742124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.742151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 
00:28:29.718 [2024-10-08 18:36:22.742262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.742289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.742466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.742493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.742681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.742707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.742905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.742931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.743051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.743077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.743184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.743215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.743406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.743434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.743613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.743640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.743761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.743789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.743977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.744005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 
00:28:29.718 [2024-10-08 18:36:22.744136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.744161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.744335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.744361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.744504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.744531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.744708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.744734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.744859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.744885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.744999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.745029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.745140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.745169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.745387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.745421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.745544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.745570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.745753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.745780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 
00:28:29.718 [2024-10-08 18:36:22.745884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.745916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.718 qpair failed and we were unable to recover it. 00:28:29.718 [2024-10-08 18:36:22.746137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.718 [2024-10-08 18:36:22.746163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.746435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.746463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.746649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.746675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.746946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.746971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.747126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.747152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.747329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.747356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.747518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.747545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.747724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.747751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.747885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.747912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 
00:28:29.719 [2024-10-08 18:36:22.748036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.748063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.748196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.748236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.748431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.748467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.748585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.748612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.748807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.748833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.748942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.748974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.749157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.749184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.749400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.749427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.749555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.749581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.749696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.749724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 
00:28:29.719 [2024-10-08 18:36:22.749833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.749863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.749976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.750004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.750132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.750159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.750403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.750429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.750608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.750635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.750908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.750935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.751070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.751097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.751221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.751248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.751407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.751434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.751620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.751647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 
00:28:29.719 [2024-10-08 18:36:22.751833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.751859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.751982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.752007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.752191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.752241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.752407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.752457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.752801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.752839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.752993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.753040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.753184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.753230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.753471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.753513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.753706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.753742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.753944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.753971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 
00:28:29.719 [2024-10-08 18:36:22.754083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.719 [2024-10-08 18:36:22.754111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.719 qpair failed and we were unable to recover it. 00:28:29.719 [2024-10-08 18:36:22.754225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.754252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.754456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.754485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.754674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.754701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.754889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.754916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.755090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.755115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.755244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.755283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.755463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.755490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.755682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.755706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.755833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.755859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 
00:28:29.720 [2024-10-08 18:36:22.755977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.756001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.756123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.756148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.756363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.756399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.756527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.756556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.756688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.756713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.756827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.756852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.756962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.756990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.757181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.757205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.757308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.757335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.757518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.757544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 
00:28:29.720 [2024-10-08 18:36:22.757726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.757750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.757870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.757894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.758018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.758043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.758265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.758306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.758473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.758520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.758729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.758776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.758936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.758982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.759208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.759247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.759494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.759521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.759640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.759665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 
00:28:29.720 [2024-10-08 18:36:22.759779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.759804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.759917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.759942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.760060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.760084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.760264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.760289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.760413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.760443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.760583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.760607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.760716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.760744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.760925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.720 [2024-10-08 18:36:22.760951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.720 qpair failed and we were unable to recover it. 00:28:29.720 [2024-10-08 18:36:22.761080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.721 [2024-10-08 18:36:22.761104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.721 qpair failed and we were unable to recover it. 00:28:29.721 [2024-10-08 18:36:22.761283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.721 [2024-10-08 18:36:22.761309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.721 qpair failed and we were unable to recover it. 
00:28:29.721 [2024-10-08 18:36:22.761498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.721 [2024-10-08 18:36:22.761523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.721 qpair failed and we were unable to recover it. 00:28:29.721 [2024-10-08 18:36:22.761713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.721 [2024-10-08 18:36:22.761737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.721 qpair failed and we were unable to recover it. 00:28:29.721 [2024-10-08 18:36:22.761851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.721 [2024-10-08 18:36:22.761876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.721 qpair failed and we were unable to recover it. 00:28:29.721 [2024-10-08 18:36:22.762050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.721 [2024-10-08 18:36:22.762075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.721 qpair failed and we were unable to recover it. 00:28:29.721 [2024-10-08 18:36:22.762245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.721 [2024-10-08 18:36:22.762270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.721 qpair failed and we were unable to recover it. 00:28:29.721 [2024-10-08 18:36:22.762396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.721 [2024-10-08 18:36:22.762422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.721 qpair failed and we were unable to recover it. 00:28:29.721 [2024-10-08 18:36:22.762553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.721 [2024-10-08 18:36:22.762578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.721 qpair failed and we were unable to recover it. 00:28:29.721 [2024-10-08 18:36:22.762700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.721 [2024-10-08 18:36:22.762747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.721 qpair failed and we were unable to recover it. 00:28:29.721 [2024-10-08 18:36:22.762907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.721 [2024-10-08 18:36:22.762951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.721 qpair failed and we were unable to recover it. 00:28:29.721 [2024-10-08 18:36:22.763108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.721 [2024-10-08 18:36:22.763153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.721 qpair failed and we were unable to recover it. 
00:28:29.721 [2024-10-08 18:36:22.763299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.721 [2024-10-08 18:36:22.763343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:29.721 qpair failed and we were unable to recover it.
00:28:29.721 [... the same three-line sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 18:36:22.763518 through 18:36:22.802930 ...]
00:28:29.726 [2024-10-08 18:36:22.803163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.726 [2024-10-08 18:36:22.803187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:29.726 qpair failed and we were unable to recover it.
00:28:29.726 [2024-10-08 18:36:22.803366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.803405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.803523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.803548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.803756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.803781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.803895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.803920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.804020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.804049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.804218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.804242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.804361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.804394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.804518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.804542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.804800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.804839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.805065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.805104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 
00:28:29.727 [2024-10-08 18:36:22.805276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.805309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.805434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.805460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.805573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.805598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.805700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.805726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.805909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.805934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.806051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.806076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.806257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.806306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.806592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.806632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.806841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.806880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.807016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.807060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 
00:28:29.727 [2024-10-08 18:36:22.807204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.807248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.807400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.807448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.807604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.807628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.807788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.807811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.807980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.808004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.808186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.808210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.808389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.808414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.808591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.808615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.808803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.808826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.808932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.808955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 
00:28:29.727 [2024-10-08 18:36:22.809084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.809107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.809279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.809332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.809596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.809670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.809929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.727 [2024-10-08 18:36:22.810001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.727 qpair failed and we were unable to recover it. 00:28:29.727 [2024-10-08 18:36:22.810148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.810183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.810369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.810425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.810627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.810659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.810772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.810804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.810992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.811025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.811209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.811241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 
00:28:29.728 [2024-10-08 18:36:22.811412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.811446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.811686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.811718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.811837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.811868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.812041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.812072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.812279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.812311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.812511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.812545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.812655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.812687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.812808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.812840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.812945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.812978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.813153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.813185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 
00:28:29.728 [2024-10-08 18:36:22.813304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.813336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.813470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.813504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.813626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.813658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.813850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.813882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.814059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.814091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.814330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.814362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.814500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.814532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.814760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.814791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.814913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.814945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.815126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.815158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 
00:28:29.728 [2024-10-08 18:36:22.815268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.815299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.815486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.815519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.815633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.815665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.815787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.815819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.815930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.815968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.816155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.816187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.816387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.816421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.816529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.816561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.816675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.816707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.816881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.816913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 
00:28:29.728 [2024-10-08 18:36:22.817183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.817216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.817458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.817493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.817685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.817717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.728 [2024-10-08 18:36:22.817835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.728 [2024-10-08 18:36:22.817867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.728 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.818066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.818099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.818280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.818312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.818505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.818539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.818654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.818686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.818803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.818836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.818958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.818990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 
00:28:29.729 [2024-10-08 18:36:22.819182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.819213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.819397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.819431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.819625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.819657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.819778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.819810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.819938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.819970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.820149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.820182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.820370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.820415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.820626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.820658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.820783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.820815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.820986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.821018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 
00:28:29.729 [2024-10-08 18:36:22.821125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.821157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.821265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.821297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.821436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.821470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.821580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.821613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.821741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.821774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.821970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.822001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.822105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.822137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.822405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.822438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.822619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.822652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.822833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.822865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 
00:28:29.729 [2024-10-08 18:36:22.823038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.823070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.823186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.823218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.823326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.823359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.823620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.823653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.823850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.823888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.824072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.824103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.824232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.824264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.824388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.824421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.824527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.824559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.824744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.824776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 
00:28:29.729 [2024-10-08 18:36:22.824961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.824993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.825144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.825176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.825363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.825408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.729 qpair failed and we were unable to recover it. 00:28:29.729 [2024-10-08 18:36:22.825588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.729 [2024-10-08 18:36:22.825622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 00:28:29.730 [2024-10-08 18:36:22.825802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.825834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 00:28:29.730 [2024-10-08 18:36:22.826005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.826037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 00:28:29.730 [2024-10-08 18:36:22.826164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.826197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 00:28:29.730 [2024-10-08 18:36:22.826481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.826518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 00:28:29.730 [2024-10-08 18:36:22.826642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.826675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 00:28:29.730 [2024-10-08 18:36:22.826844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.826876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 
00:28:29.730 [2024-10-08 18:36:22.826992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.827023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 00:28:29.730 [2024-10-08 18:36:22.827126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.827158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 00:28:29.730 [2024-10-08 18:36:22.827287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.827319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 00:28:29.730 [2024-10-08 18:36:22.827524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.827558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 00:28:29.730 [2024-10-08 18:36:22.827821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.827853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 00:28:29.730 [2024-10-08 18:36:22.828035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.828067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 00:28:29.730 [2024-10-08 18:36:22.828181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.828213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 00:28:29.730 [2024-10-08 18:36:22.828407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.828440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 00:28:29.730 [2024-10-08 18:36:22.828567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.828600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 00:28:29.730 [2024-10-08 18:36:22.828792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.730 [2024-10-08 18:36:22.828824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.730 qpair failed and we were unable to recover it. 
00:28:29.730 [2024-10-08 18:36:22.828946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.730 [2024-10-08 18:36:22.828978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:29.730 qpair failed and we were unable to recover it.
00:28:29.735 [... the same three-line sequence -- connect() failed, errno = 111 / sock connection error of tqpair=... with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." -- repeats with fresh timestamps from 18:36:22.829108 through 18:36:22.867716; from 18:36:22.866100 onward the failing tqpair is 0x7f1858000b90 instead of 0x7f185c000b90, still addr=10.0.0.2, port=4420 ...]
00:28:29.735 [2024-10-08 18:36:22.867828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.735 [2024-10-08 18:36:22.867860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.735 qpair failed and we were unable to recover it. 00:28:29.735 [2024-10-08 18:36:22.868030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.735 [2024-10-08 18:36:22.868062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.735 qpair failed and we were unable to recover it. 00:28:29.735 [2024-10-08 18:36:22.868231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.735 [2024-10-08 18:36:22.868264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.735 qpair failed and we were unable to recover it. 00:28:29.735 [2024-10-08 18:36:22.868436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.735 [2024-10-08 18:36:22.868470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.735 qpair failed and we were unable to recover it. 00:28:29.735 [2024-10-08 18:36:22.868655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.735 [2024-10-08 18:36:22.868695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.735 qpair failed and we were unable to recover it. 00:28:29.735 [2024-10-08 18:36:22.868887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.735 [2024-10-08 18:36:22.868919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.735 qpair failed and we were unable to recover it. 00:28:29.735 [2024-10-08 18:36:22.869043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.735 [2024-10-08 18:36:22.869074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.735 qpair failed and we were unable to recover it. 00:28:29.735 [2024-10-08 18:36:22.869194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.735 [2024-10-08 18:36:22.869225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.735 qpair failed and we were unable to recover it. 00:28:29.735 [2024-10-08 18:36:22.869358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.735 [2024-10-08 18:36:22.869403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.735 qpair failed and we were unable to recover it. 00:28:29.735 [2024-10-08 18:36:22.869527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.735 [2024-10-08 18:36:22.869559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.735 qpair failed and we were unable to recover it. 
00:28:29.735 [2024-10-08 18:36:22.869818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.869852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.870026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.870058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.870253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.870286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.870407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.870441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.870577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.870609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.870867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.870900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.871006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.871038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.871164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.871196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.871306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.871339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.871549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.871582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 
00:28:29.736 [2024-10-08 18:36:22.871784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.871816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.871942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.871975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.872161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.872193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.872365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.872409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.872585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.872618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.872831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.872863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.873041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.873073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.873309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.873341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.873538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.873572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.873699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.873731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 
00:28:29.736 [2024-10-08 18:36:22.873970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.874002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.874148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.874191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.874317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.874344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.874511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.874546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.874678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.874710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.874945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.874976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.875108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.875139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.875373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.875434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.875561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.875594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.875775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.875807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 
00:28:29.736 [2024-10-08 18:36:22.876007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.876039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.876220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.876252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.876374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.876419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.876608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.876640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.876760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.876798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.877035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.736 [2024-10-08 18:36:22.877066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.736 qpair failed and we were unable to recover it. 00:28:29.736 [2024-10-08 18:36:22.877180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.877213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.877325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.877358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.877605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.877637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.877745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.877776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 
00:28:29.737 [2024-10-08 18:36:22.878005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.878037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.878222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.878254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.878435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.878468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.878602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.878634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.878766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.878798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.878936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.878971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.879146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.879177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.879297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.879328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.879462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.879495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.879729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.879761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 
00:28:29.737 [2024-10-08 18:36:22.880023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.880056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.880179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.880210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.880324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.880355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.880497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.880529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.880708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.880740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.880859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.880890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.881087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.881119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.881221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.881253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.881372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.881413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.881585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.881616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 
00:28:29.737 [2024-10-08 18:36:22.881798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.881830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.881997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.882069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.882213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.882248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.882356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.882402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.882522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.882555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.882749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.882782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.882890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.882921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.883087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.883119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.883246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.883279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.883398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.883431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 
00:28:29.737 [2024-10-08 18:36:22.883556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.883589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.883707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.883739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.883932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.883964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.884133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.884165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.884351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.884403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.884660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.737 [2024-10-08 18:36:22.884693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.737 qpair failed and we were unable to recover it. 00:28:29.737 [2024-10-08 18:36:22.884801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.884834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.885082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.885115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.885286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.885317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.885529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.885564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 
00:28:29.738 [2024-10-08 18:36:22.885669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.885703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.885948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.885980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.886114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.886146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.886290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.886322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.886476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.886510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.886716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.886748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.886861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.886893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.887060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.887093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.887282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.887314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.887441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.887473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 
00:28:29.738 [2024-10-08 18:36:22.887657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.887690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.887948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.887981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.888162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.888199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.888396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.888430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.888559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.888592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.888711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.888744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.888940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.888973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.889098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.889130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.889249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.889280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.889469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.889504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 
00:28:29.738 [2024-10-08 18:36:22.889617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.889649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.889770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.889808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.890002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.890034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.890142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.890175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.890394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.890427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.890538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.890571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.890753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.890786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.890893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.890925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.891045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.891078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.891181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.891214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 
00:28:29.738 [2024-10-08 18:36:22.891357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.891399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.891525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.891560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.891672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.891705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.891888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.891920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.892114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.892147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.892256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.738 [2024-10-08 18:36:22.892289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.738 qpair failed and we were unable to recover it. 00:28:29.738 [2024-10-08 18:36:22.892423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.739 [2024-10-08 18:36:22.892457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.739 qpair failed and we were unable to recover it. 00:28:29.739 [2024-10-08 18:36:22.892580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.739 [2024-10-08 18:36:22.892612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.739 qpair failed and we were unable to recover it. 00:28:29.739 [2024-10-08 18:36:22.892790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.739 [2024-10-08 18:36:22.892828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.739 qpair failed and we were unable to recover it. 00:28:29.739 [2024-10-08 18:36:22.892997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.739 [2024-10-08 18:36:22.893030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:29.739 qpair failed and we were unable to recover it. 
00:28:29.739 [2024-10-08 18:36:22.893210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.739 [2024-10-08 18:36:22.893243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:29.739 qpair failed and we were unable to recover it.
[... the same three-line failure record (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats back-to-back from 18:36:22.893424 through 18:36:22.901340 ...]
00:28:29.740 [2024-10-08 18:36:22.901508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.740 [2024-10-08 18:36:22.901579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:29.740 qpair failed and we were unable to recover it.
[... the identical failure triplet then repeats for the new socket context tqpair=0x7f1858000b90, still targeting addr=10.0.0.2, port=4420, from 18:36:22.901727 through 18:36:22.935302 ...]
00:28:29.744 [2024-10-08 18:36:22.935586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.935619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.935811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.935842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.935985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.936017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.936124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.936156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.936397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.936430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.936537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.936567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.936781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.936812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.936980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.937011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.937222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.937253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.937423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.937457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 
00:28:29.744 [2024-10-08 18:36:22.937570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.937601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.937795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.937831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.937974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.938005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.938190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.938222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.938397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.938429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.938536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.938567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.938795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.938827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.938950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.744 [2024-10-08 18:36:22.938981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.744 qpair failed and we were unable to recover it. 00:28:29.744 [2024-10-08 18:36:22.939099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.939131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.939394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.939427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 
00:28:29.745 [2024-10-08 18:36:22.939623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.939655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.939825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.939857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.940145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.940178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.940361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.940416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.940624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.940656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.940925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.940958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.941194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.941226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.941471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.941505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.941702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.941734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.941997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.942029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 
00:28:29.745 [2024-10-08 18:36:22.942200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.942232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.942408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.942441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.942566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.942597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.942729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.942761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.942945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.942976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.943154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.943186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.943313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.943344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.943596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.943629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.943870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.943919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.944126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.944155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 
00:28:29.745 [2024-10-08 18:36:22.944351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.944389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.944584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.944611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.944874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.944901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.945024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.945050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.945311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.945338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.945541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.945568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.945824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.945851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.946033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.946059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.946186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.946213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.946467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.946495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 
00:28:29.745 [2024-10-08 18:36:22.946683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.946710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.946887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.946914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.947039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.947065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.947192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.947219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.947339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.947366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.947611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.947637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.947801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.947828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.947994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.948020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.745 qpair failed and we were unable to recover it. 00:28:29.745 [2024-10-08 18:36:22.948277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.745 [2024-10-08 18:36:22.948303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.948421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.948452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 
00:28:29.746 [2024-10-08 18:36:22.948637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.948665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.948852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.948878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.949064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.949091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.949216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.949243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.949543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.949570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.949675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.949713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.949896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.949922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.950103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.950129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.950303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.950329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.950524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.950552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 
00:28:29.746 [2024-10-08 18:36:22.950732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.950760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.951022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.951048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.951236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.951262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.951506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.951534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.951719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.951745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.951917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.951943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.952122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.952148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.952341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.952367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.952506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.952532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.952714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.952741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 
00:28:29.746 [2024-10-08 18:36:22.952984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.953011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.953210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.953235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.953416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.953443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.953560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.953588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.953760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.953786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.953955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.953981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.954102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.954129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.954306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.954333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.954576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.954603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.954863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.954889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 
00:28:29.746 [2024-10-08 18:36:22.955123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.955149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.955400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.955428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.955596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.955627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.955817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.955843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.956018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.956045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.956287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.956314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.956456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.956483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.956669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.956695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.956810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.746 [2024-10-08 18:36:22.956836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.746 qpair failed and we were unable to recover it. 00:28:29.746 [2024-10-08 18:36:22.957073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.957099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 
00:28:29.747 [2024-10-08 18:36:22.957292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.957319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.957536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.957563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.957735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.957761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.957876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.957905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.958081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.958107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.958353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.958387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.958681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.958747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.958976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.959022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.959246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.959277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.959514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.959551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 
00:28:29.747 [2024-10-08 18:36:22.959747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.959781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.959939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.959974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.960243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.960277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.960518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.960552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.960824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.960860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.961148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.961183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.961390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.961431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.961681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.961715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.961897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.961932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.962173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.962228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 
00:28:29.747 [2024-10-08 18:36:22.962393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.962428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.962623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.962659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.962853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.962884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.963059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.963096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.963212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.963242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.963500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.963541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.963710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.963742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.964012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.964047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.964231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.964265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 00:28:29.747 [2024-10-08 18:36:22.964544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.747 [2024-10-08 18:36:22.964585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:29.747 qpair failed and we were unable to recover it. 
00:28:29.747 [2024-10-08 18:36:22.964796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.747 [2024-10-08 18:36:22.964829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:29.747 qpair failed and we were unable to recover it.
00:28:29.747 [... this three-line failure block repeats for every connection attempt between 18:36:22.964796 and 18:36:23.011977 (~200 repetitions), with only the timestamp and the tqpair handle changing: 0x7f185c000b90, then 0x7f1864000b90 (from 18:36:22.978900), then 0xa01c60 (from 18:36:22.988046); addr=10.0.0.2, port=4420 and errno = 111 are identical throughout ...]
00:28:29.753 [2024-10-08 18:36:23.011949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.753 [2024-10-08 18:36:23.011977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:29.753 qpair failed and we were unable to recover it.
00:28:29.753 [2024-10-08 18:36:23.012154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.753 [2024-10-08 18:36:23.012181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.753 qpair failed and we were unable to recover it. 00:28:29.753 [2024-10-08 18:36:23.012432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.753 [2024-10-08 18:36:23.012459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.753 qpair failed and we were unable to recover it. 00:28:29.753 [2024-10-08 18:36:23.012585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.753 [2024-10-08 18:36:23.012611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.753 qpair failed and we were unable to recover it. 00:28:29.753 [2024-10-08 18:36:23.012722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.753 [2024-10-08 18:36:23.012750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.753 qpair failed and we were unable to recover it. 00:28:29.753 [2024-10-08 18:36:23.012983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.753 [2024-10-08 18:36:23.013009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.753 qpair failed and we were unable to recover it. 00:28:29.753 [2024-10-08 18:36:23.013127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.753 [2024-10-08 18:36:23.013155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.753 qpair failed and we were unable to recover it. 00:28:29.753 [2024-10-08 18:36:23.013275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.753 [2024-10-08 18:36:23.013302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:29.753 qpair failed and we were unable to recover it. 00:28:29.753 [2024-10-08 18:36:23.013474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.753 [2024-10-08 18:36:23.013502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.031 qpair failed and we were unable to recover it. 00:28:30.031 [2024-10-08 18:36:23.013621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.031 [2024-10-08 18:36:23.013648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.031 qpair failed and we were unable to recover it. 00:28:30.031 [2024-10-08 18:36:23.013832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.031 [2024-10-08 18:36:23.013860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.031 qpair failed and we were unable to recover it. 
00:28:30.031 [2024-10-08 18:36:23.014083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.031 [2024-10-08 18:36:23.014109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.031 qpair failed and we were unable to recover it. 00:28:30.031 [2024-10-08 18:36:23.014364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.031 [2024-10-08 18:36:23.014403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.031 qpair failed and we were unable to recover it. 00:28:30.031 [2024-10-08 18:36:23.014522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.031 [2024-10-08 18:36:23.014549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.031 qpair failed and we were unable to recover it. 00:28:30.031 [2024-10-08 18:36:23.014755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.031 [2024-10-08 18:36:23.014781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.031 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.014959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.014986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.015111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.015137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.015317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.015343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.015477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.015504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.015688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.015715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.015889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.015915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 
00:28:30.032 [2024-10-08 18:36:23.016090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.016115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.016402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.016430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.016532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.016571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.016770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.016796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.017035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.017060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.017194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.017221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.017392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.017420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.017680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.017706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.017813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.017843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.017970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.017997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 
00:28:30.032 [2024-10-08 18:36:23.018251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.018277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.018451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.018478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.018609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.018636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.018803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.018829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.018961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.018988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.019160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.019187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.019406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.019434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.019560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.019586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.019703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.019732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.019839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.019868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 
00:28:30.032 [2024-10-08 18:36:23.020069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.020096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.020213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.020239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.020434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.020461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.020658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.020684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.020863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.020890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.021084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.021111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.021283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.021309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.021564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.021592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.021777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.021803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.022039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.022070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 
00:28:30.032 [2024-10-08 18:36:23.022324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.022351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.022547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.022573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.022841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.032 [2024-10-08 18:36:23.022868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.032 qpair failed and we were unable to recover it. 00:28:30.032 [2024-10-08 18:36:23.023035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.033 [2024-10-08 18:36:23.023062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.033 qpair failed and we were unable to recover it. 00:28:30.033 [2024-10-08 18:36:23.023177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.033 [2024-10-08 18:36:23.023204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.033 qpair failed and we were unable to recover it. 00:28:30.033 [2024-10-08 18:36:23.023318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.033 [2024-10-08 18:36:23.023346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.033 qpair failed and we were unable to recover it. 00:28:30.033 [2024-10-08 18:36:23.023567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.033 [2024-10-08 18:36:23.023617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.033 qpair failed and we were unable to recover it. 00:28:30.033 [2024-10-08 18:36:23.023873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.033 [2024-10-08 18:36:23.023907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.033 qpair failed and we were unable to recover it. 00:28:30.033 [2024-10-08 18:36:23.024096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.033 [2024-10-08 18:36:23.024128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.033 qpair failed and we were unable to recover it. 00:28:30.033 [2024-10-08 18:36:23.024399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.033 [2024-10-08 18:36:23.024433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.033 qpair failed and we were unable to recover it. 
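On Linux, errno 111 is ECONNREFUSED, which in this test typically means nothing was accepting connections on 10.0.0.2:4420 (the NVMe/TCP target's default port) at the moment the initiator tried. The standalone sketch below is illustrative only, not SPDK code; it uses plain sockets with the address and port taken from the log to reproduce the same errno that posix_sock_create reports above.

/* Minimal sketch (not SPDK code): connect() to a TCP port with no
 * listener fails with errno 111 (ECONNREFUSED) on Linux, the same
 * errno posix_sock_create logs above. Address/port mirror the log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run against a host with no listener on that port and the output matches the first line of each failure triplet in the log.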
[message repeated: the identical failure for tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 recurs ~110 more times, every attempt ending in errno = 111 (timestamps 18:36:23.024669 through 18:36:23.048362, elapsed 00:28:30.033 to 00:28:30.035)]
00:28:30.035 [2024-10-08 18:36:23.048624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.035 [2024-10-08 18:36:23.048657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.035 qpair failed and we were unable to recover it. 00:28:30.035 [2024-10-08 18:36:23.048899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.035 [2024-10-08 18:36:23.048931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.049051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.049083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.049260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.049292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.049530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.049564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.049825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.049857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.050039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.050071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.050214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.050245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.050487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.050519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.050775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.050807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 
00:28:30.036 [2024-10-08 18:36:23.050990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.051021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.051207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.051238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.051435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.051467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.051665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.051697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.051882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.051913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.052028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.052060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.052228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.052259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.052458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.052490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.052682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.052713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.052830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.052861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 
00:28:30.036 [2024-10-08 18:36:23.053103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.053135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.053353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.053395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.053576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.053608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.053870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.053902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.054103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.054134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.054329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.054361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.054624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.054661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.054925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.054956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.055133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.055165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.055296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.055328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 
00:28:30.036 [2024-10-08 18:36:23.055537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.055571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.055750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.055782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.056049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.056081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.056182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.056214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.056434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.056467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.036 [2024-10-08 18:36:23.056634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.036 [2024-10-08 18:36:23.056666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.036 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.056850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.056882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.056998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.057029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.057209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.057241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.057437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.057470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 
00:28:30.037 [2024-10-08 18:36:23.057656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.057688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.057875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.057907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.058044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.058076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.058272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.058303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.058490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.058522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.058713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.058745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.058859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.058889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.059153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.059184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.059432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.059465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.059652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.059683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 
00:28:30.037 [2024-10-08 18:36:23.059860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.059892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.060118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.060151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.060333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.060365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.060670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.060703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.060838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.060868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.061076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.061108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.061292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.061323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.061469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.061501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.061782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.061815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.062005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.062037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 
00:28:30.037 [2024-10-08 18:36:23.062246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.062277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.062469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.062502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.062622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.062654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.062903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.062933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.063123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.063154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.063333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.063365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.063577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.063615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.063858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.063889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.064083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.064115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.064294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.064325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 
00:28:30.037 [2024-10-08 18:36:23.064451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.064484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.064752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.064784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.065017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.065049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.065169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.065200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.065452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.037 [2024-10-08 18:36:23.065485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.037 qpair failed and we were unable to recover it. 00:28:30.037 [2024-10-08 18:36:23.065703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.065734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.065851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.065882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.066121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.066154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.066327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.066358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.066607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.066640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 
00:28:30.038 [2024-10-08 18:36:23.066856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.066889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.067076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.067108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.067295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.067327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.067611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.067644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.067829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.067861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.068048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.068080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.068288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.068319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.068575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.068608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.068788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.068820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.068935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.068966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 
00:28:30.038 [2024-10-08 18:36:23.069206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.069238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.069476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.069509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.069693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.069724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.069936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.069978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.070173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.070201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.070384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.070413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.070530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.070556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.070796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.070823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.071060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.071086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.071268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.071294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 
00:28:30.038 [2024-10-08 18:36:23.071412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.071443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.071623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.071650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.071834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.071860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.072038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.072065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.072191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.072218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.072395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.072423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.072625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.072651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.072880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.072906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.073030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.073057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.073234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.073260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 
00:28:30.038 [2024-10-08 18:36:23.073428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.073475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.073674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.073710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.073884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.073916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.074154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.074186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.074362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.074402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.038 qpair failed and we were unable to recover it. 00:28:30.038 [2024-10-08 18:36:23.074607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.038 [2024-10-08 18:36:23.074639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.074875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.074907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.075107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.075139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.075253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.075285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.075556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.075589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 
00:28:30.039 [2024-10-08 18:36:23.075800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.075831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.075969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.075996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.076255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.076282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.076449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.076477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.076611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.076637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.076874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.076900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.077088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.077115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.077290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.077316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.077502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.077529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.077711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.077758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 
00:28:30.039 [2024-10-08 18:36:23.078023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.078064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.078278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.078320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.078529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.078556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.078737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.078765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.078987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.079013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.079217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.079243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.079371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.079413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.079650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.079691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.079896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.079935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.080206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.080247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 
00:28:30.039 [2024-10-08 18:36:23.080408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.080459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.080683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.080722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.080944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.080983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.081205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.081245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.081482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.081518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.081640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.081671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.081906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.081938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.082054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.082091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.082358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.082398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 00:28:30.039 [2024-10-08 18:36:23.082585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.039 [2024-10-08 18:36:23.082617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.039 qpair failed and we were unable to recover it. 
00:28:30.041 [2024-10-08 18:36:23.094485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.041 [2024-10-08 18:36:23.094520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.041 qpair failed and we were unable to recover it.
[... the same record pair repeats for tqpair=0xa01c60 from 18:36:23.094718 through 18:36:23.101022 ...]
00:28:30.042 [2024-10-08 18:36:23.101222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.042 [2024-10-08 18:36:23.101256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.042 qpair failed and we were unable to recover it.
[... the same record pair repeats for tqpair=0x7f185c000b90 from 18:36:23.101385 through 18:36:23.124487 ...]
00:28:30.044 [2024-10-08 18:36:23.124747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.044 [2024-10-08 18:36:23.124779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.044 qpair failed and we were unable to recover it. 00:28:30.044 [2024-10-08 18:36:23.124908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.044 [2024-10-08 18:36:23.124940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.044 qpair failed and we were unable to recover it. 00:28:30.044 [2024-10-08 18:36:23.125150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.044 [2024-10-08 18:36:23.125181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.044 qpair failed and we were unable to recover it. 00:28:30.044 [2024-10-08 18:36:23.125365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.044 [2024-10-08 18:36:23.125408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.044 qpair failed and we were unable to recover it. 00:28:30.044 [2024-10-08 18:36:23.125676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.044 [2024-10-08 18:36:23.125708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.044 qpair failed and we were unable to recover it. 00:28:30.044 [2024-10-08 18:36:23.125973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.044 [2024-10-08 18:36:23.126004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.044 qpair failed and we were unable to recover it. 00:28:30.044 [2024-10-08 18:36:23.126187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.126219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.126351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.126393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.126662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.126693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.126875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.126907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 
00:28:30.045 [2024-10-08 18:36:23.127086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.127118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.127287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.127318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.127494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.127527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.127716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.127747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.127917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.127949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.128064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.128096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.128275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.128306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.128427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.128461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.128634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.128666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.128925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.128956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 
00:28:30.045 [2024-10-08 18:36:23.129138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.129170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.129351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.129397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.129660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.129692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.129881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.129912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.130086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.130117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.130243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.130275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.130398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.130430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.130612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.130644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.130757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.130789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.130903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.130935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 
00:28:30.045 [2024-10-08 18:36:23.131197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.131229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.131366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.131405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.131594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.131626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.131735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.131767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.131986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.132018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.132191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.132223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.132409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.132442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.132624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.132657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.132794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.132825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.132939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.132971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 
00:28:30.045 [2024-10-08 18:36:23.133153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.133185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.133287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.133318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.133508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.133540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.133782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.133814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.134005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.134036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.134216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.045 [2024-10-08 18:36:23.134248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.045 qpair failed and we were unable to recover it. 00:28:30.045 [2024-10-08 18:36:23.134426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.134458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.134738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.134769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.134908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.134940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.135068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.135100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 
00:28:30.046 [2024-10-08 18:36:23.135363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.135402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.135524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.135555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.135686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.135718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.135834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.135864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.136073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.136104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.136296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.136328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.136519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.136551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.136768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.136801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.137039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.137069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.137335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.137365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 
00:28:30.046 [2024-10-08 18:36:23.137562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.137595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.137859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.137896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.138085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.138117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.138386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.138418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.138682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.138714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.138844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.138875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.139117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.139148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.139424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.139458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.139643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.139674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.139922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.139954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 
00:28:30.046 [2024-10-08 18:36:23.140137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.140168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.140354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.140408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.140678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.140710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.140991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.141022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.141234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.141271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.141432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.141464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.141668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.141700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.141884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.141915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.142096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.142128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.142316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.142349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 
00:28:30.046 [2024-10-08 18:36:23.142500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.142531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.142700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.142732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.143008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.143041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.143227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.143258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.143395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.143428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.143611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.046 [2024-10-08 18:36:23.143643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.046 qpair failed and we were unable to recover it. 00:28:30.046 [2024-10-08 18:36:23.143835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.143866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.144033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.144064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.144261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.144293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.144595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.144627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 
00:28:30.047 [2024-10-08 18:36:23.144761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.144793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.144985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.145017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.145189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.145220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.145394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.145432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.145669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.145701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.145879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.145910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.146030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.146062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.146251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.146283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.146479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.146511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.146778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.146810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 
00:28:30.047 [2024-10-08 18:36:23.147047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.147079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.147205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.147247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.147430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.147463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.147720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.147752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.147924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.147955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.148124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.148155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.148421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.148455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.148588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.148621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.148808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.148840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.149077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.149109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 
00:28:30.047 [2024-10-08 18:36:23.149308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.149340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.149534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.149566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.149743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.149774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.150019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.150050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.150235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.150266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.150393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.150426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.150607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.150638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.150818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.150849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.151059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.151091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.151223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.151254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 
00:28:30.047 [2024-10-08 18:36:23.151504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.151537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.151724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.151755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.151941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.151972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.152153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.152184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.152357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.152405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.047 [2024-10-08 18:36:23.152601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.047 [2024-10-08 18:36:23.152632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.047 qpair failed and we were unable to recover it. 00:28:30.048 [2024-10-08 18:36:23.152813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.048 [2024-10-08 18:36:23.152844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.048 qpair failed and we were unable to recover it. 00:28:30.048 [2024-10-08 18:36:23.153010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.048 [2024-10-08 18:36:23.153041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.048 qpair failed and we were unable to recover it. 00:28:30.048 [2024-10-08 18:36:23.153194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.048 [2024-10-08 18:36:23.153235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.048 qpair failed and we were unable to recover it. 00:28:30.048 [2024-10-08 18:36:23.153366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.048 [2024-10-08 18:36:23.153401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.048 qpair failed and we were unable to recover it. 
00:28:30.048 [2024-10-08 18:36:23.153596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.048 [2024-10-08 18:36:23.153623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.048 qpair failed and we were unable to recover it.
00:28:30.048 [... the same connect()/qpair-failure triplet repeats continuously against tqpair=0xa01c60 from 18:36:23.153596 through 18:36:23.198349; duplicate entries collapsed ...]
00:28:30.053 [2024-10-08 18:36:23.198508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.053 [2024-10-08 18:36:23.198580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.053 qpair failed and we were unable to recover it.
00:28:30.053 [... the same triplet then repeats against tqpair=0x7f185c000b90 through 18:36:23.200219; duplicate entries collapsed ...]
00:28:30.053 [2024-10-08 18:36:23.200404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.200439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.200694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.200726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.200911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.200943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.201134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.201175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.201431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.201465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.201613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.201654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.201928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.201962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.202084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.202116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.202309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.202353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.202549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.202584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 
00:28:30.053 [2024-10-08 18:36:23.202805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.202838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.202980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.203020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.203213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.203244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.203414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.203451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.203666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.203698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.203947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.203983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.204245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.204280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.204476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.204510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.204725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.204760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.204940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.204973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 
00:28:30.053 [2024-10-08 18:36:23.205223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.205258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.053 qpair failed and we were unable to recover it. 00:28:30.053 [2024-10-08 18:36:23.205398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.053 [2024-10-08 18:36:23.205434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.205712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.205747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.205881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.205922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.206118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.206151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.206391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.206427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.206626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.206657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.206947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.206982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.207186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.207221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.207403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.207436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 
00:28:30.054 [2024-10-08 18:36:23.207561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.207600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.207719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.207751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.208018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.208089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.208309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.208346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.208604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.208641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.208772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.208801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.208971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.208998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.209182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.209209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.209373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.209407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.209643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.209668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 
00:28:30.054 [2024-10-08 18:36:23.209861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.209888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.210075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.210102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.210274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.210300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.210579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.210606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.210799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.210825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.210935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.210964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.211206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.211232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.211401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.211430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.211634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.211661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.211779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.211808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 
00:28:30.054 [2024-10-08 18:36:23.211987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.212013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.212272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.212298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.212467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.212494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.212676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.212703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.212833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.212859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.213033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.213060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.213247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.213273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.213462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.213489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.213607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.213634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.213756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.213789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 
00:28:30.054 [2024-10-08 18:36:23.214065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.214092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.054 [2024-10-08 18:36:23.214328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.054 [2024-10-08 18:36:23.214355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.054 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.214604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.214631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.214849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.214875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.215065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.215091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.215280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.215306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.215484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.215511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.215631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.215658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.215838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.215864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.216103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.216130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 
00:28:30.055 [2024-10-08 18:36:23.216369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.216408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.216526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.216552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.216679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.216706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.216837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.216863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.217048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.217073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.217329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.217356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.217607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.217634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.217857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.217883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.218007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.218034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.218154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.218181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 
00:28:30.055 [2024-10-08 18:36:23.218305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.218331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.218590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.218617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.218801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.218829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.219013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.219039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.219245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.219271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.219411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.219438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.219622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.219649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.219939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.219966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.220095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.220120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.220300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.220327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 
00:28:30.055 [2024-10-08 18:36:23.220461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.220488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.220748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.220774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.221030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.221055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.221237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.221263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.221445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.221472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.221590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.221620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.221857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.221882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.222077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.222104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.222281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.222307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 00:28:30.055 [2024-10-08 18:36:23.222490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.055 [2024-10-08 18:36:23.222517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.055 qpair failed and we were unable to recover it. 
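Every triplet above is the same failure: connect() to 10.0.0.2:4420 returns errno 111, which on Linux is ECONNREFUSED (nothing is listening on the target port yet), so each new NVMe/TCP qpair fails before the fabric-level connect can even start. Below is a minimal standalone sketch of the condition posix_sock_create is hitting; it is not SPDK code, only plain POSIX sockets, and the only values taken from the log are the address 10.0.0.2 and port 4420:

    /* Minimal reproduction of the connect() failure in the log: dialing a
     * TCP port with no listener yields errno 111 (ECONNREFUSED) on Linux.
     * Address/port are the values reported by nvme_tcp_qpair_connect_sock. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With no target listening this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }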
00:28:30.055 [2024-10-08 18:36:23.222571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0fbb0 (9): Bad file descriptor
00:28:30.055 [2024-10-08 18:36:23.222926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.055 [2024-10-08 18:36:23.222995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.055 qpair failed and we were unable to recover it.
00:28:30.056 ... (triplet repeats for tqpair=0x7f1858000b90 from 18:36:23.223283 through 18:36:23.226157) ...
00:28:30.056 [2024-10-08 18:36:23.226344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.056 [2024-10-08 18:36:23.226426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.056 qpair failed and we were unable to recover it.
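The one-off nvme_tcp_qpair_process_completions line above is a different failure mode from the connect() retries: the "(9)" it reports reads as an errno value, and errno 9 on Linux is EBADF (Bad file descriptor, matching the message text), i.e. the flush ran against a qpair whose socket had already been torn down. A quick way to decode the two errno values this log section contains; a sketch, not SPDK code:

    /* Decode the errno values seen in this log section. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int codes[] = { 111, 9 };   /* from "errno = 111" and "(9): Bad file descriptor" */
        for (unsigned i = 0; i < sizeof(codes) / sizeof(codes[0]); i++)
            printf("errno %d -> %s\n", codes[i], strerror(codes[i]));
        /* Prints: errno 111 -> Connection refused
         *         errno 9 -> Bad file descriptor */
        return 0;
    }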
00:28:30.056 [2024-10-08 18:36:23.227787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.227818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.227991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.228024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.228166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.228198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.228413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.228448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.228711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.228743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.228917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.228948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.229140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.229171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.229351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.229402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.229606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.229637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.229750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.229782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 
00:28:30.056 [2024-10-08 18:36:23.229980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.230013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.230201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.230233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.230418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.230452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.230650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.230682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.230873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.230905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.231096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.231127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.231395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.231428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.231619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.231651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.231825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.231856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 00:28:30.056 [2024-10-08 18:36:23.231987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.056 [2024-10-08 18:36:23.232019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.056 qpair failed and we were unable to recover it. 
00:28:30.058 [2024-10-08 18:36:23.244586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.058 [2024-10-08 18:36:23.244625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.058 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0xa01c60 through 2024-10-08 18:36:23.258861 ...]
00:28:30.060 [2024-10-08 18:36:23.259174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.060 [2024-10-08 18:36:23.259212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.060 qpair failed and we were unable to recover it.
00:28:30.060 [2024-10-08 18:36:23.259431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.060 [2024-10-08 18:36:23.259485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.060 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7f185c000b90 through 2024-10-08 18:36:23.276569 ...]
00:28:30.062 [2024-10-08 18:36:23.276849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.276882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.277048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.277079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.277279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.277316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.277462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.277491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.277697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.277723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.277846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.277873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.278060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.278086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.278318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.278344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.278520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.278548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.278685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.278713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 
00:28:30.062 [2024-10-08 18:36:23.278822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.278850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.279019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.279046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.279151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.279181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.279373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.279414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.279655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.279683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.279857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.279883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.280064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.280091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.280391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.280432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.280726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.280764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.280908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.280938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 
00:28:30.062 [2024-10-08 18:36:23.281102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.281129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.281338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.281365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.281505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.281532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.281651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.281677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.281801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.281827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.282004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.282030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.282203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.282230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.282402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.282429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.282670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.282696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.282878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.282927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 
00:28:30.062 [2024-10-08 18:36:23.283140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.283179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.283341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.283396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.283600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.283643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.283848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.062 [2024-10-08 18:36:23.283874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.062 qpair failed and we were unable to recover it. 00:28:30.062 [2024-10-08 18:36:23.283983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.284014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.284125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.284153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.284318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.284345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.284474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.284502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.284669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.284695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.284814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.284841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 
00:28:30.063 [2024-10-08 18:36:23.285019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.285045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.285243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.285269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.285464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.285491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.285620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.285648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.285765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.285791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.285968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.285995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.286108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.286136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.286323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.286349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.286661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.286691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.286876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.286903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 
00:28:30.063 [2024-10-08 18:36:23.287165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.287192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.287315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.287340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.287562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.287589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.287756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.287782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.287947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.287973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.288156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.288195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.288355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.288417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.288721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.288761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.288915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.288961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.289180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.289219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 
00:28:30.063 [2024-10-08 18:36:23.289447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.289487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.289705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.289743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.289959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.289985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.290228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.290254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.290430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.290457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.290651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.290677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.290902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.290942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.291178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.291218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.291451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.291522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.291799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.291833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 
00:28:30.063 [2024-10-08 18:36:23.292012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.292084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.292213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.292242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.292415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.292443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.292625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.292652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.292754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.063 [2024-10-08 18:36:23.292786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.063 qpair failed and we were unable to recover it. 00:28:30.063 [2024-10-08 18:36:23.292949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.292998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.293198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.293233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.293387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.293431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.293707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.293740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.293915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.293947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 
00:28:30.064 [2024-10-08 18:36:23.294078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.294111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.294366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.294409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.294672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.294704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.294812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.294843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.295037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.295069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.295325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.295357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.295509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.295541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.295671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.295702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.295889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.295920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.296180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.296212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 
00:28:30.064 [2024-10-08 18:36:23.296500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.296534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.296720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.296751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.296870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.296903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.297085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.297117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.297370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.297410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.297586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.297617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.297875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.297907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.298112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.298145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.298355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.298394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.298609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.298642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 
00:28:30.064 [2024-10-08 18:36:23.298914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.298946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.299115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.299147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.299286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.299318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.299442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.299475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.299657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.299688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.299863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.299895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.300106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.300139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.300336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.300367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.300650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.300682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.300865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.300897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 
00:28:30.064 [2024-10-08 18:36:23.301076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.301118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.301398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.301433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.301672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.301703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.301888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.301920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.302090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.064 [2024-10-08 18:36:23.302121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.064 qpair failed and we were unable to recover it. 00:28:30.064 [2024-10-08 18:36:23.302335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.065 [2024-10-08 18:36:23.302367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.065 qpair failed and we were unable to recover it. 00:28:30.065 [2024-10-08 18:36:23.302513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.065 [2024-10-08 18:36:23.302546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.065 qpair failed and we were unable to recover it. 00:28:30.065 [2024-10-08 18:36:23.302721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.065 [2024-10-08 18:36:23.302753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.065 qpair failed and we were unable to recover it. 00:28:30.065 [2024-10-08 18:36:23.302961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.065 [2024-10-08 18:36:23.302992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.065 qpair failed and we were unable to recover it. 00:28:30.065 [2024-10-08 18:36:23.303198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.065 [2024-10-08 18:36:23.303230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.065 qpair failed and we were unable to recover it. 
00:28:30.065 [2024-10-08 18:36:23.303410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.065 [2024-10-08 18:36:23.303444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.065 qpair failed and we were unable to recover it. 00:28:30.065 [2024-10-08 18:36:23.303560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.065 [2024-10-08 18:36:23.303592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.065 qpair failed and we were unable to recover it. 00:28:30.065 [2024-10-08 18:36:23.303783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.065 [2024-10-08 18:36:23.303814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.065 qpair failed and we were unable to recover it. 00:28:30.065 [2024-10-08 18:36:23.303923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.065 [2024-10-08 18:36:23.303955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.065 qpair failed and we were unable to recover it. 00:28:30.065 [2024-10-08 18:36:23.304145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.065 [2024-10-08 18:36:23.304177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.065 qpair failed and we were unable to recover it. 00:28:30.065 [2024-10-08 18:36:23.304311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.065 [2024-10-08 18:36:23.304344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.065 qpair failed and we were unable to recover it. 00:28:30.065 [2024-10-08 18:36:23.304526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.065 [2024-10-08 18:36:23.304560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.065 qpair failed and we were unable to recover it. 00:28:30.065 [2024-10-08 18:36:23.304780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.065 [2024-10-08 18:36:23.304811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.065 qpair failed and we were unable to recover it. 00:28:30.065 [2024-10-08 18:36:23.304921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.065 [2024-10-08 18:36:23.304957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.065 qpair failed and we were unable to recover it. 00:28:30.065 [2024-10-08 18:36:23.305131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.065 [2024-10-08 18:36:23.305163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.065 qpair failed and we were unable to recover it. 
00:28:30.065 [2024-10-08 18:36:23.305392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.065 [2024-10-08 18:36:23.305424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.065 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7f1864000b90 from 18:36:23.305561 through 18:36:23.332792 ...]
00:28:30.068 [2024-10-08 18:36:23.333079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.068 [2024-10-08 18:36:23.333148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.068 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f185c000b90 from 18:36:23.333427 through 18:36:23.341909 ...]
00:28:30.349 [2024-10-08 18:36:23.342081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.349 [2024-10-08 18:36:23.342151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.349 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f1858000b90 from 18:36:23.342298 through 18:36:23.350208 ...]
00:28:30.350 [2024-10-08 18:36:23.350438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.350 [2024-10-08 18:36:23.350481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.350 qpair failed and we were unable to recover it.
[... the same sequence continues for tqpair=0xa01c60 through 18:36:23.351078 ...]
00:28:30.350 [2024-10-08 18:36:23.351229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.351276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.351534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.351577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.351743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.351791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.352020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.352056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.352191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.352223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.352330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.352362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.352615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.352647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.352832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.352864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.352973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.353005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.353177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.353208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 
00:28:30.350 [2024-10-08 18:36:23.353395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.353427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.353618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.353650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.353763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.353794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.354079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.354110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.354279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.354311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.354432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.354466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.354635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.354667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.354899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.354931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.355145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.355177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.355305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.355337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 
00:28:30.350 [2024-10-08 18:36:23.355540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.355574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.355753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.355786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.355925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.355957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.356126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.356158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.356273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.356305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.356543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.356577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.356765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.356796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.356932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.356964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.357085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.357116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.357303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.357335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 
00:28:30.350 [2024-10-08 18:36:23.357585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.357619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.357729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.350 [2024-10-08 18:36:23.357761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.350 qpair failed and we were unable to recover it. 00:28:30.350 [2024-10-08 18:36:23.357865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.357896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.358024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.358057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.358243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.358275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.358465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.358498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.358688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.358719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.358930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.358968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.359157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.359189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.359318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.359350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 
00:28:30.351 [2024-10-08 18:36:23.359618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.359651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.359832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.359864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.360062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.360094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.360353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.360396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.360607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.360638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.360821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.360853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.360968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.361000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.361185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.361217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.361415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.361449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.361621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.361652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 
00:28:30.351 [2024-10-08 18:36:23.361796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.361828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.362005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.362036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.362143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.362174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.362414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.362448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.362554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.362585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.362702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.362734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.362845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.362876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.363112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.363143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.363245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.363276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.363394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.363427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 
00:28:30.351 [2024-10-08 18:36:23.363559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.363592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.363758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.363789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.364028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.364060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.364242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.364275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.364542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.364575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.364695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.364727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.364908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.364940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.365057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.365088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.365260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.365292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.365481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.365514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 
00:28:30.351 [2024-10-08 18:36:23.365690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.351 [2024-10-08 18:36:23.365721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.351 qpair failed and we were unable to recover it. 00:28:30.351 [2024-10-08 18:36:23.365922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.365953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.366076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.366108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.366237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.366269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.366450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.366483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.366721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.366754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.366948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.366981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.367160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.367198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.367387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.367420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.367630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.367661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 
00:28:30.352 [2024-10-08 18:36:23.367879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.367912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.368103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.368136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.368310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.368342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.368541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.368574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.368694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.368725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.368857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.368888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.369128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.369159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.369370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.369412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.369591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.369623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.369890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.369922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 
00:28:30.352 [2024-10-08 18:36:23.370108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.370139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.370263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.370294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.370474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.370514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.370690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.370721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.370838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.370871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.371054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.371086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.371272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.371304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.371475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.371509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.371634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.371667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.371877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.371908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 
00:28:30.352 [2024-10-08 18:36:23.372114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.372147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.372323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.372355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.372491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.372524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.372729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.372761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.352 [2024-10-08 18:36:23.372940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.352 [2024-10-08 18:36:23.372973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.352 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.373084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.373116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.373239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.373270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.373531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.373565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.373677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.373709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.373879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.373911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 
00:28:30.353 [2024-10-08 18:36:23.374039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.374071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.374206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.374238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.374425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.374459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.374562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.374594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.374777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.374810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.375048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.375079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.375197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.375229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.375402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.375441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.375626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.375657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.375925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.375956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 
00:28:30.353 [2024-10-08 18:36:23.376144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.376177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.376361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.376413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.376607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.376639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.376755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.376786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.377023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.377055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.377165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.377197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.377389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.377422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.377597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.377629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.377815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.377847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 00:28:30.353 [2024-10-08 18:36:23.377963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.377997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it. 
00:28:30.353 [2024-10-08 18:36:23.378102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.353 [2024-10-08 18:36:23.378134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.353 qpair failed and we were unable to recover it.
00:28:30.353 [... the three-line sequence above (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> qpair failed and we were unable to recover it) repeats 209 more times between 18:36:23.378318 and 18:36:23.422885, all against addr=10.0.0.2, port=4420: 82 further occurrences for tqpair=0x7f1858000b90, 110 for tqpair=0x7f185c000b90, and 17 for tqpair=0xa01c60 ...]
00:28:30.359 [2024-10-08 18:36:23.423012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.423044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.423161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.423193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.423395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.423427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.423543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.423574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.423817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.423854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.424100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.424132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.424397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.424429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.424546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.424578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.424690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.424722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.424921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.424953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 
00:28:30.359 [2024-10-08 18:36:23.425213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.425245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.425449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.425482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.425664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.425696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.425904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.425937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.426067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.426099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.426280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.426312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.426497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.426529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.426743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.426776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.427029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.427061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 00:28:30.359 [2024-10-08 18:36:23.427244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.359 [2024-10-08 18:36:23.427274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.359 qpair failed and we were unable to recover it. 
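errno 111 on Linux is ECONNREFUSED: the test has torn down the target's listener on 10.0.0.2:4420, so every TCP connect from the host side is refused and the qpair cannot be re-established. A minimal standalone sketch (not SPDK code; it assumes, as in this netns setup, that the address is reachable but nothing is listening) that reproduces the same errno:

```c
/* Minimal sketch: connect() to a reachable address with no listener
 * fails with ECONNREFUSED, which is errno 111 on Linux. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the target killed, this prints: connect() failed, errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```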
00:28:30.359 [... further connect() failed (errno = 111) attempts on tqpair=0x7f185c000b90 from 18:36:23.427400 through 18:36:23.428374, each ending "qpair failed and we were unable to recover it." ...]
00:28:30.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 583610 Killed "${NVMF_APP[@]}" "$@"
00:28:30.359 [... three more failed attempts on tqpair=0x7f185c000b90, 18:36:23.428611 through 18:36:23.428962 ...]
00:28:30.359 [2024-10-08 18:36:23.429175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.359 [2024-10-08 18:36:23.429210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.359 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:28:30.359 qpair failed and we were unable to recover it.
00:28:30.359 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:30.359 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:28:30.359 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:30.359 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:30.359 [... interleaved with the trace above, failed connect() attempts (errno = 111) continue, now on tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420, from 18:36:23.429471 through 18:36:23.431099 ...]
00:28:30.360 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats for tqpair=0x7f1864000b90 (addr=10.0.0.2, port=4420) from 18:36:23.431266 through 18:36:23.435200 ...]
00:28:30.360 [... two further failed attempts land on tqpair=0xa01c60 (addr=10.0.0.2, port=4420), 18:36:23.435415 and 18:36:23.435697 ...]
00:28:30.360 [... failed connect() attempts (errno = 111) continue on tqpair=0xa01c60, 18:36:23.435905 through 18:36:23.437000 ...]
00:28:30.360 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=584405
00:28:30.360 [... two more failed attempts on tqpair=0xa01c60, 18:36:23.437199 and 18:36:23.437349 ...]
00:28:30.360 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 584405
00:28:30.360 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:30.360 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 584405 ']'
00:28:30.360 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:30.360 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:30.360 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:30.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:30.360 [... interleaved with the trace above, failed connect() attempts (errno = 111) continue: two on tqpair=0xa01c60 (18:36:23.437572, 18:36:23.437847) and four on tqpair=0x7f1864000b90 (18:36:23.438087 through 18:36:23.438887), each ending "qpair failed and we were unable to recover it." ...]
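The waitforlisten step above polls until the freshly started nvmf_tgt (pid 584405) accepts connections on its RPC socket, /var/tmp/spdk.sock, giving up after max_retries=100. The real helper is a bash function in autotest_common.sh; the C sketch below is only an illustrative analogue of that poll-until-listening loop, and the 100 ms retry interval is an assumption:

```c
/* Illustrative analogue (not the real bash helper): try to connect to the
 * SPDK RPC UNIX socket until the target accepts, or give up after
 * max_retries attempts. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { 0 };
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;           /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);     /* retry interval (assumed, not from the log) */
    }
    return -1;                  /* never came up */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "nvmf_tgt did not start listening\n");
        return 1;
    }
    puts("target is listening");
    return 0;
}
```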
00:28:30.361 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:30.361 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:30.361 [... failed connect() attempts (errno = 111) continue on tqpair=0x7f1864000b90, 18:36:23.439078 through 18:36:23.440943 ...]
00:28:30.361 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats for tqpair=0x7f1864000b90 (addr=10.0.0.2, port=4420) from 18:36:23.441212 through 18:36:23.453834 ...]
00:28:30.363 [2024-10-08 18:36:23.454085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.454117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.454374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.454467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.454706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.454739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.455035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.455067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.455234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.455299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.455637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.455679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.455949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.455977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.456165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.456192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.456498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.456527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.456702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.456729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.457003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.457030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.457153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.457179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.457371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.457410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.457590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.457617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.457740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.457771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.457878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.457912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.458088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.458115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.458297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.458324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.458519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.458547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.458667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.458698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.458899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.458927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.459120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.459147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.459323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.459349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.459545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.459573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.459712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.459739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.363 qpair failed and we were unable to recover it.
00:28:30.363 [2024-10-08 18:36:23.459860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.363 [2024-10-08 18:36:23.459890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.460022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.460050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.460165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.460198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.460398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.460425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.460605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.460631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.460811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.460837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.461082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.461114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.461280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.461307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.461480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.461509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.461695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.461722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.461960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.461987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.462193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.462219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.462405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.462432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.462611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.462637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.462873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.462901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.463084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.463111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.463315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.463342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.463542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.463570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.463687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.463714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.463851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.463877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.464072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.464098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.464203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.464234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.464401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.464429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.464631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.464657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.464840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.464866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.465034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.465060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.465241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.465268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.465482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.465510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.465642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.465672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.465870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.465897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.466075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.466102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.466286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.466313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.466486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.466514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.466694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.466726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.466897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.466923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.467028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.467058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.467225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.467252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.364 [2024-10-08 18:36:23.467367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.364 [2024-10-08 18:36:23.467410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.364 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.467597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.467623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.467883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.467909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.468077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.468104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.468227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.468253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.468436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.468464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.468700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.468727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.468834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.468867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.469065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.469090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.469266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.469293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.469538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.469609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.469877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.469949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.470181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.470234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.470472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.470512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.470770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.470806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.470999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.471034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.471214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.471259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.471402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.471436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.471631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.471667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.471806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.471846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.472040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.472081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.472345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.472396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.472592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.472625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.472801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.472843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.473091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.473126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.473340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.473372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.473495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.473529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.473716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.473749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.473990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.474022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.474214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.474246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.474422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.474456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.365 qpair failed and we were unable to recover it.
00:28:30.365 [2024-10-08 18:36:23.474697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.365 [2024-10-08 18:36:23.474730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.474969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.475001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.475176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.475208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.475386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.475420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.475661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.475693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.475818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.475851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.476034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.476066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.476260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.476292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.476508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.476543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.476717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.476748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.477017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.477050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.477313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.477346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.477498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.477531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.477756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.477788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.477904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.477936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.478111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.478142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.478391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.478426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.478718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.478750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.478923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.478955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.479171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.479213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.479332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.479365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.479592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.479625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.479812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.479844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.480027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.480059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.480177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.480209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.480444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.480477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.480677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.480709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.480902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.480934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.481142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.481173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.481367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.481407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.481514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.366 [2024-10-08 18:36:23.481547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.366 qpair failed and we were unable to recover it.
00:28:30.366 [2024-10-08 18:36:23.481718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.481749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.481932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.481974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.482168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.482201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.482396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.482429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.482538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.482570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.482686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.482717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.482831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.482863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.483092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.483124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.483412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.483446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.483638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.483670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.483778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.483809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.484015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.484048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.484309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.484341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.484481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.484514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.484633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.484664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.484796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.484829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.485079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.485111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.485232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.485264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.485525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.485559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.485846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.485879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.486024] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:28:30.367 [2024-10-08 18:36:23.486065] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:30.367 [2024-10-08 18:36:23.486143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.486175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.486411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.486443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.486624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.486656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.486897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.486930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.487178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.487210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
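Context note: the two interleaved initialization records above ("Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization..." and the "[ DPDK EAL parameters: ... ]" dump) show the nvmf target process starting up in the middle of the host-side connect/retry loop. In that parameter list, -c 0xF0 is a DPDK core mask selecting cores 4-7, --file-prefix=spdk0 keeps this process's hugepage files separate from other DPDK processes, and --proc-type=auto lets DPDK choose primary or secondary mode. A target that is still initializing, and so not yet listening, is consistent with every connect() in this stretch being refused.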
00:28:30.367 [2024-10-08 18:36:23.487394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.487426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.487595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.487627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.487810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.487843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.487973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.488004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.488275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.488307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.488437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.488471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.488653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.488685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.488872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.488904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.489087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.489119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.489397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.489431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.489671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.489703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.489939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.489970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.367 [2024-10-08 18:36:23.490155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.367 [2024-10-08 18:36:23.490187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.367 qpair failed and we were unable to recover it.
00:28:30.368 [2024-10-08 18:36:23.490382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.368 [2024-10-08 18:36:23.490416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.368 qpair failed and we were unable to recover it.
00:28:30.368 [2024-10-08 18:36:23.490680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.368 [2024-10-08 18:36:23.490712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.368 qpair failed and we were unable to recover it.
00:28:30.368 [2024-10-08 18:36:23.490970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.368 [2024-10-08 18:36:23.491007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.368 qpair failed and we were unable to recover it.
00:28:30.368 [2024-10-08 18:36:23.491194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.368 [2024-10-08 18:36:23.491226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.368 qpair failed and we were unable to recover it.
00:28:30.368 [2024-10-08 18:36:23.491423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.368 [2024-10-08 18:36:23.491456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.368 qpair failed and we were unable to recover it.
00:28:30.368 [2024-10-08 18:36:23.491696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.368 [2024-10-08 18:36:23.491727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.368 qpair failed and we were unable to recover it.
00:28:30.368 [2024-10-08 18:36:23.491850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.368 [2024-10-08 18:36:23.491881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.368 qpair failed and we were unable to recover it.
00:28:30.368 [2024-10-08 18:36:23.492016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.492048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.492175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.492207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.492477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.492511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.492750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.492782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.492903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.492934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.493114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.493145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.493394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.493427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.493644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.493676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.493927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.493958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.494202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.494234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 
00:28:30.368 [2024-10-08 18:36:23.494351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.494391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.494519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.494550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.494790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.494822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.494956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.494988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.495179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.495210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.495399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.495432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.495614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.495646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.495770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.495801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.496063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.496094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.496309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.496341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 
00:28:30.368 [2024-10-08 18:36:23.496550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.496584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.496849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.496880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.497148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.368 [2024-10-08 18:36:23.497180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.368 qpair failed and we were unable to recover it. 00:28:30.368 [2024-10-08 18:36:23.497317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.497349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.497497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.497530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.497643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.497675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.497849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.497882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.497988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.498020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.498274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.498306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.498420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.498454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 
00:28:30.369 [2024-10-08 18:36:23.498651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.498682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.498851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.498882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.499072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.499104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.499356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.499398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.499643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.499674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.499946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.499984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.500090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.500121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.500290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.500323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.500515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.500548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.500691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.500723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 
00:28:30.369 [2024-10-08 18:36:23.500901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.500933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.501065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.501098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.501278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.501311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.501446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.501480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.501659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.501690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.501956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.501989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.502128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.502159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.502336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.502368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.502514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.502546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.502749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.502780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 
00:28:30.369 [2024-10-08 18:36:23.502962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.502994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.503167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.503200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.503396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.503429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.503556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.503589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.503825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.503857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.504052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.504083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.504208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.504240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.504419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.504453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.504630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.504662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.504919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.504951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 
00:28:30.369 [2024-10-08 18:36:23.505129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.505161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.505431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.505465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.369 [2024-10-08 18:36:23.505661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.369 [2024-10-08 18:36:23.505693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.369 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.505871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.505904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.506084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.506116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.506287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.506319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.506462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.506495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.506689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.506722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.506990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.507021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.507190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.507222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 
00:28:30.370 [2024-10-08 18:36:23.507398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.507431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.507554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.507584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.507785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.507817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.508017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.508049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.508219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.508251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.508458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.508497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.508738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.508772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.508892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.508923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.509159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.509190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.509456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.509489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 
00:28:30.370 [2024-10-08 18:36:23.509673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.509704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.509888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.509920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.510027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.510060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.510234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.510266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.510446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.510479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.510684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.510716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.510891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.510928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.511165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.511196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.511304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.511336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.511521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.511555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 
00:28:30.370 [2024-10-08 18:36:23.511674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.511706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.511816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.511848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.512083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.512114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.512363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.512406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.512575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.512607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.512809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.512842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.512976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.513007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.513208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.513241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.513480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.513514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.513702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.513733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 
00:28:30.370 [2024-10-08 18:36:23.513912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.370 [2024-10-08 18:36:23.513955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.370 qpair failed and we were unable to recover it. 00:28:30.370 [2024-10-08 18:36:23.514140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.514171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.514357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.514399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.514588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.514619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.514728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.514759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.514931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.514962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.515096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.515128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.515308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.515340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.515598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.515631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.515765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.515797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 
00:28:30.371 [2024-10-08 18:36:23.515990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.516022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.516261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.516293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.516421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.516455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.516700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.516731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.516912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.516944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.517118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.517163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.517267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.517299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.517472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.517506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.517720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.517752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.517868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.517899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 
00:28:30.371 [2024-10-08 18:36:23.518095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.518127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.518314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.518346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.518649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.518692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.518938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.518966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.519233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.519260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.519389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.519417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.519596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.519623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.519873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.519899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.520075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.520107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.520354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.520392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 
00:28:30.371 [2024-10-08 18:36:23.520593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.520620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.520798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.520825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.521088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.521114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.521299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.521325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.521530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.521559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.521746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.521772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.371 [2024-10-08 18:36:23.521974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.371 [2024-10-08 18:36:23.522000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.371 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.522108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.522137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.522347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.522373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.522498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.522530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 
00:28:30.372 [2024-10-08 18:36:23.522666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.522696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.522947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.522975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.523210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.523243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.523387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.523417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.523538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.523569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.523685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.523715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.523828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.523861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.524098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.524125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.524398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.524426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.524697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.524725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 
00:28:30.372 [2024-10-08 18:36:23.524898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.524924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.525168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.525194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.525361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.525397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.525612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.525639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.525757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.525783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.525888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.525919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.526115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.526142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.526262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.526288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.526419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.526447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.526629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.526655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 
00:28:30.372 [2024-10-08 18:36:23.526919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.526945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.527195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.527222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.527417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.527444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.527629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.527656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.527861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.527889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.528074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.528100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.528264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.528290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.528456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.528484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.528672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.528704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 00:28:30.372 [2024-10-08 18:36:23.528826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.372 [2024-10-08 18:36:23.528853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.372 qpair failed and we were unable to recover it. 
00:28:30.372 [2024-10-08 18:36:23.529049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.372 [2024-10-08 18:36:23.529075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.372 qpair failed and we were unable to recover it.
00:28:30.372 [2024-10-08 18:36:23.529251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.372 [2024-10-08 18:36:23.529277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.372 qpair failed and we were unable to recover it.
00:28:30.372 [2024-10-08 18:36:23.529456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.372 [2024-10-08 18:36:23.529483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.372 qpair failed and we were unable to recover it.
00:28:30.372 [2024-10-08 18:36:23.529700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.372 [2024-10-08 18:36:23.529726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.372 qpair failed and we were unable to recover it.
00:28:30.372 [2024-10-08 18:36:23.529972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.372 [2024-10-08 18:36:23.529999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.372 qpair failed and we were unable to recover it.
00:28:30.372 [2024-10-08 18:36:23.530127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.372 [2024-10-08 18:36:23.530153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.372 qpair failed and we were unable to recover it.
00:28:30.372 [2024-10-08 18:36:23.530414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.372 [2024-10-08 18:36:23.530442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.372 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.530575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.530601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.530769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.530796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.530984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.531011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.531138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.531164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.531271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.531302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.531491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.531518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.531782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.531815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.532015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.532041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.532165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.532192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.532482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.532510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.532696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.532722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.532912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.532939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.533173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.533200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.533476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.533503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.533669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.533695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.533963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.533989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.534112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.534138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.534310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.534336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.534543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.534571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.534747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.534773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.534949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.534975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.535095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.535124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.535389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.535415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.535589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.535616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.535792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.535818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.535936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.535961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.536216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.536243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.536412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.536440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.536638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.536664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.536911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.536937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.537056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.537084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.537318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.537345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.537648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.537687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.537874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.537912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.538119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.538151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.538360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.538404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.538596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.538628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.538752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.538784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.538971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.539002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.373 qpair failed and we were unable to recover it.
00:28:30.373 [2024-10-08 18:36:23.539189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.373 [2024-10-08 18:36:23.539220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.539418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.539451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.539645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.539677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.539811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.539843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.539968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.540000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.540119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.540152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.540404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.540436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.540685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.540718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.540833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.540865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.541051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.541083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.541288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.541321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.541443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.541477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.541658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.541691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.541965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.541996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.542130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.542162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.542275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.542307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.542585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.542617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.542893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.542925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.543096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.543128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.543249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.543282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.543462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.543495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.543717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.543766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.543895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.543927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.544166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.544199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.544323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.544356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.544570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.544603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.544863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.544895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.544952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:30.374 [2024-10-08 18:36:23.545010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.545043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.545170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.545202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.545440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.545473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.545663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.545695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.545831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.545863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.546049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.546081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.546277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.546309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.546531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.546584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.546717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.546750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.546881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.546913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.547120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.547153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.547406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.547439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.374 [2024-10-08 18:36:23.547545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.374 [2024-10-08 18:36:23.547578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.374 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.547848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.547881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.548112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.548144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.548332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.548365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.548598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.548631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.548871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.548902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.549021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.549053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.549182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.549226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.549360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.549410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.549610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.549642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.549837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.549869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.550127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.550159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.550289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.550320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.550466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.550500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.550750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.550781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.551036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.551068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.551199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.551232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.551515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.551549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.551745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.551776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.551967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.551999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.552204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.552236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.552449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.552482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.552703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.552735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.552980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.553012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.553219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.553251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.553518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.553551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.553880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.553912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.554097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.554130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.554266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.554298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.554482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.554516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.554700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.554732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.554849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.554880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.555057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.555088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.555291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.555323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.555516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.555549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.555694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.555735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.375 qpair failed and we were unable to recover it.
00:28:30.375 [2024-10-08 18:36:23.556019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.375 [2024-10-08 18:36:23.556046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.556228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.556255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.556448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.556477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.556660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.556687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.556873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.556900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.557013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.557044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.557151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.557181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.557389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.557417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.557534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.557560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.557803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.557829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.558007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.558034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.558221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.558247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.558442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.558470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.558604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.558630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.558873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.558900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.559097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.559124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.559308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.559335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.559623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.559650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.559909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.559935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.560065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.560092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.560279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.560305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.560473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.560501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.560736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.560763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.560996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.561021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.561213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.561239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.561448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.561475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.561590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.561623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.561793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.561819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.561992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.562018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.562213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.562240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.562349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.562388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.562559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.562586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.562717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.562743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.562976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.563002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.563167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.563193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.563370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.563409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.563581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.563608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.563918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.563945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.564060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.564088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.564258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.376 [2024-10-08 18:36:23.564285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.376 qpair failed and we were unable to recover it.
00:28:30.376 [2024-10-08 18:36:23.564477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.377 [2024-10-08 18:36:23.564505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.377 qpair failed and we were unable to recover it.
00:28:30.377 [2024-10-08 18:36:23.564626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.377 [2024-10-08 18:36:23.564654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.377 qpair failed and we were unable to recover it.
00:28:30.377 [2024-10-08 18:36:23.564776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.377 [2024-10-08 18:36:23.564803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.377 qpair failed and we were unable to recover it.
00:28:30.377 [2024-10-08 18:36:23.565036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.377 [2024-10-08 18:36:23.565062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.377 qpair failed and we were unable to recover it.
00:28:30.377 [2024-10-08 18:36:23.565265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.377 [2024-10-08 18:36:23.565291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.377 qpair failed and we were unable to recover it.
00:28:30.377 [2024-10-08 18:36:23.565555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.377 [2024-10-08 18:36:23.565583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.377 qpair failed and we were unable to recover it.
00:28:30.377 [2024-10-08 18:36:23.565714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.377 [2024-10-08 18:36:23.565741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.377 qpair failed and we were unable to recover it.
00:28:30.377 [2024-10-08 18:36:23.565856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.377 [2024-10-08 18:36:23.565890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.377 qpair failed and we were unable to recover it.
00:28:30.377 [2024-10-08 18:36:23.566002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.566030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.566275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.566302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.566428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.566456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.566645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.566672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.566787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.566824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.567080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.567106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.567315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.567342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.567524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.567551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.567801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.567828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.567956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.567981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 
00:28:30.377 [2024-10-08 18:36:23.568192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.568218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.568401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.568430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.568566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.568593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.568791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.568818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.568996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.569023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.569270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.569296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.569553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.569581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.569701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.569727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.569844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.569874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.570117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.570149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 
00:28:30.377 [2024-10-08 18:36:23.570335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.570361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.570505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.570532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.570714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.570740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.571002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.571029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.571211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.571238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.571416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.571444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.571678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.571705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.571907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.571934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.572045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.572074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.572249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.572275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 
00:28:30.377 [2024-10-08 18:36:23.572441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.572468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.572588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.572615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.572850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.572877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.377 qpair failed and we were unable to recover it. 00:28:30.377 [2024-10-08 18:36:23.573052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.377 [2024-10-08 18:36:23.573078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.573341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.573368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.573478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.573508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.573624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.573651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.573922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.573948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.574065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.574095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.574288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.574314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 
00:28:30.378 [2024-10-08 18:36:23.574492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.574519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.574753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.574780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.575038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.575063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.575269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.575295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.575428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.575456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.575724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.575751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.575940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.575976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.576102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.576128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.576247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.576282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.576448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.576476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 
00:28:30.378 [2024-10-08 18:36:23.576593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.576620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.576854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.576881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.577009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.577034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.577271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.577297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.577574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.577602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.577806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.577833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.578003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.578030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.578147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.578173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.578431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.578459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.578647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.578673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 
00:28:30.378 [2024-10-08 18:36:23.578871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.578909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.579109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.579142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.579314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.579346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.579493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.579522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.579710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.579736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.579930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.579957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.580080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.580106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.580275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.580302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.580470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.580500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.580606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.580637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 
00:28:30.378 [2024-10-08 18:36:23.580824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.580851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.581016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.581044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.581173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.581200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.581388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.378 [2024-10-08 18:36:23.581417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.378 qpair failed and we were unable to recover it. 00:28:30.378 [2024-10-08 18:36:23.581618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.581646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.581763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.581790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.581972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.581998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.582181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.582209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.582335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.582363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.582547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.582575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 
00:28:30.379 [2024-10-08 18:36:23.582834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.582863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.583072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.583100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.583201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.583234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.583419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.583447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.583567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.583602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.583836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.583863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.583995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.584022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.584263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.584296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.584461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.584492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.584601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.584631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 
00:28:30.379 [2024-10-08 18:36:23.584736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.584766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.584948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.584976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.585149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.585177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.585345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.585372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.585514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.585542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.585653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.585681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.585797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.585824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.586057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.586086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.586271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.586299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.586534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.586564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 
00:28:30.379 [2024-10-08 18:36:23.586800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.586828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.586939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.586969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.587087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.587113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.587349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.587386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.587530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.587556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.587727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.587755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.587879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.587908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.588106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.588133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.588324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.588352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.588547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.588576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 
00:28:30.379 [2024-10-08 18:36:23.588686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.379 [2024-10-08 18:36:23.588715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.379 qpair failed and we were unable to recover it. 00:28:30.379 [2024-10-08 18:36:23.588880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.588907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.589144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.589172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.589356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.589418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.589545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.589577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.589689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.589721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.589907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.589934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.590053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.590080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.590256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.590283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.590459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.590487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 
00:28:30.380 [2024-10-08 18:36:23.590620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.590646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.590824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.590850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.591107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.591133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.591396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.591423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.591602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.591628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.591805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.591831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.591948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.591975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.592081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.592112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.592283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.592309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.592491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.592519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 
00:28:30.380 [2024-10-08 18:36:23.592700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.592726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.592917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.592943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.593047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.593078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.593199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.593226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.593401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.593429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.593560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.593588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.593709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.593736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.593851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.593878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.593995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.594022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.594142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.594168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 
00:28:30.380 [2024-10-08 18:36:23.594348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.594374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.594504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.594531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.594797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.594825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.595072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.595098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.595234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.595261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.595388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.595416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.595616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.595642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.595770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.595796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.595919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.595946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 00:28:30.380 [2024-10-08 18:36:23.596047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.380 [2024-10-08 18:36:23.596079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.380 qpair failed and we were unable to recover it. 
00:28:30.380 [2024-10-08 18:36:23.596182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.380 [2024-10-08 18:36:23.596215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.380 qpair failed and we were unable to recover it.
00:28:30.380 [2024-10-08 18:36:23.596316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.380 [2024-10-08 18:36:23.596348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.380 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.596563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.596613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.596789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.596832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.596959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.596991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.597121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.597150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.597260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.597292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.597466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.597502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.597624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.597650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.597850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.597876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.598062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.598088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.598358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.598393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.598564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.598592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.598779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.598805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.599080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.599107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.599300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.599326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.599518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.599546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.599660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.599688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.599807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.599834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.600015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.600042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.600276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.600303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.600480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.600508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.600639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.600665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.600779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.600806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.601015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.601042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.601235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.601261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.601523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.601550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.601687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.601713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.601897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.601923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.602039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.602066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.602248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.602274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.602517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.602544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.602724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.602755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.602952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.602979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.603157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.603183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.603374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.603413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.603533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.603560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.603725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.603750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.603874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.603901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.604085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.604113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.604301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.604328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.604452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.381 [2024-10-08 18:36:23.604479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.381 qpair failed and we were unable to recover it.
00:28:30.381 [2024-10-08 18:36:23.604643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.604669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.604856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.604883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.605078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.605104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.605212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.605244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.605369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.605415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.605536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.605563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.605729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.605756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.606006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.606032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.606149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.606176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.606291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.606317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.606438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.606470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.606647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.606674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.606851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.606877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.607112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.607137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.607253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.607279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.607409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.607436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.607618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.607651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.607887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.607914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.608110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.608136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.608397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.608425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.608658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.608684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.608875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.608901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.609146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.609173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.609305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.609331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.609534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.609562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.609823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.609849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.609973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.610001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.610171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.610197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.610461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.610490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.610675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.610701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.610875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.610902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.611164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.611196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.611385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.611412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.611530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.611556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.611686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.611713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.611906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.611932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.612046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.612073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.612281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.612308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.612501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.612528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.612735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.612762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.382 [2024-10-08 18:36:23.612895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.382 [2024-10-08 18:36:23.612921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.382 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.613037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.613063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.613322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.613348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.613492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.613520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.613760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.613786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.613989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.614016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.614184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.614209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.614331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.614357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.614484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.614512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.614685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.614714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.614823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.614860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.615135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.615165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.615344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.615372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.615562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.615591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.615774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.615802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.616048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.616073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.616174] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:30.383 [2024-10-08 18:36:23.616205] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:30.383 [2024-10-08 18:36:23.616212] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:30.383 [2024-10-08 18:36:23.616218] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:30.383 [2024-10-08 18:36:23.616224] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
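The app.c notices above describe how these tracepoints can be inspected; a minimal sketch of that workflow follows (the command and the shm file name are taken verbatim from the notices; the destination path is illustrative, and the SPDK tools are assumed to be on PATH):

    spdk_trace -s nvmf -i 0              # snapshot events from the running nvmf app, per the notice
    cp /dev/shm/nvmf_trace.0 /tmp/       # or keep the shm trace file for offline analysis/debug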
00:28:30.383 [2024-10-08 18:36:23.616260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.616300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.616479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.616514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.616696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.616724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.616960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.616987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.617166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.617192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.617358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.617414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.617599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.617626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.617795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.617821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.617884] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:28:30.383 [2024-10-08 18:36:23.617991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.618018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.617990] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:28:30.383 [2024-10-08 18:36:23.618098] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:28:30.383 [2024-10-08 18:36:23.618122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.618152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.618099] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7
00:28:30.383 [2024-10-08 18:36:23.618352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.618393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.618662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.618690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.618824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.618851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.619053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.619081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.619265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.619291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.619460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.619488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.619674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.619700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.619892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.619918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.620106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.620132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.620315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.620341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.620550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.620578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.383 qpair failed and we were unable to recover it.
00:28:30.383 [2024-10-08 18:36:23.620696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.383 [2024-10-08 18:36:23.620723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.620917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.620943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.621055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.621084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.621318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.621344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.621500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.621544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.621668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.621708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.621926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.621959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.622253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.622287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.622482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.622516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.622790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.622823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.623058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.623092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.623269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.623302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.623497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.623531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.623721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.623753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.623961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.624007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.624156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.624188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.624428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.624462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.624598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.624631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.624817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.624850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.625046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.625078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.625206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.625240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.625443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.625478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.625663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.625695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.625931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.625965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.626137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.626170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.626443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.626479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.626611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.626644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.626868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.626902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.627118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.627153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.627293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.627326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.627475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.627508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.627619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.627650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.627916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.627970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.628165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.628208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.628407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.628440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.628611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.628646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.628776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.384 [2024-10-08 18:36:23.628807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.384 qpair failed and we were unable to recover it.
00:28:30.384 [2024-10-08 18:36:23.628996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.629028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.629135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.629166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.629296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.629328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.629451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.629484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.629746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.629778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.629909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.629941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.630192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.630224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.630394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.630427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.630600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.630639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.630759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.630791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.630981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.631013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.631185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.631217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.631335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.631367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.631493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.631525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.631656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.631687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.631866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.631898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.632087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.632119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.632249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.632281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.632466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.632500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.632689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.632720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.632901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.632933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.633209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.633242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.633512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.633546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.633721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.633754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.633885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.633917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.634086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.634119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.634289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.634321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.634528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.634562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.634740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.634772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.635040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.635072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.635262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.635294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.635514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.635546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.635643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.635675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.635860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.635892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.636136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.636168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.636361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.636422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.636554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.636588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.636777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.636810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.637003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.637036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.637219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.385 [2024-10-08 18:36:23.637252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.385 qpair failed and we were unable to recover it.
00:28:30.385 [2024-10-08 18:36:23.637495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.386 [2024-10-08 18:36:23.637529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.386 qpair failed and we were unable to recover it.
00:28:30.386 [2024-10-08 18:36:23.637765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.386 [2024-10-08 18:36:23.637798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.386 qpair failed and we were unable to recover it.
00:28:30.386 [2024-10-08 18:36:23.637969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.386 [2024-10-08 18:36:23.638002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.386 qpair failed and we were unable to recover it.
00:28:30.386 [2024-10-08 18:36:23.638130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.386 [2024-10-08 18:36:23.638163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.386 qpair failed and we were unable to recover it.
00:28:30.386 [2024-10-08 18:36:23.638383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.386 [2024-10-08 18:36:23.638417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.386 qpair failed and we were unable to recover it.
00:28:30.386 [2024-10-08 18:36:23.638602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.386 [2024-10-08 18:36:23.638635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.386 qpair failed and we were unable to recover it.
00:28:30.386 [2024-10-08 18:36:23.638762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.386 [2024-10-08 18:36:23.638794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.386 qpair failed and we were unable to recover it.
00:28:30.386 [2024-10-08 18:36:23.638987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.386 [2024-10-08 18:36:23.639020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.386 qpair failed and we were unable to recover it.
00:28:30.386 [2024-10-08 18:36:23.639217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.386 [2024-10-08 18:36:23.639259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.386 qpair failed and we were unable to recover it.
00:28:30.386 [2024-10-08 18:36:23.639445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.386 [2024-10-08 18:36:23.639479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.386 qpair failed and we were unable to recover it.
00:28:30.386 [2024-10-08 18:36:23.639667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.639700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.639814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.639846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.640027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.640060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.640178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.640210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.640343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.640388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.640553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.640587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.640709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.640742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.640955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.640988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.641092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.641124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.641296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.641329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 
00:28:30.386 [2024-10-08 18:36:23.641513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.641548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.641736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.641769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.641949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.641984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.642153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.642187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.642399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.642435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.642612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.642647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.642831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.642863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.643053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.643086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.643324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.643361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.643636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.643673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 
00:28:30.386 [2024-10-08 18:36:23.643792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.643825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.644000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.644032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.644232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.644265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.644456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.644491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.644618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.644651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.644894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.644949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.645150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.645190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.645317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.645360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.645555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.645588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.386 [2024-10-08 18:36:23.645876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.645909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 
00:28:30.386 [2024-10-08 18:36:23.646098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.386 [2024-10-08 18:36:23.646130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.386 qpair failed and we were unable to recover it. 00:28:30.387 [2024-10-08 18:36:23.646268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.387 [2024-10-08 18:36:23.646299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.387 qpair failed and we were unable to recover it. 00:28:30.387 [2024-10-08 18:36:23.646474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.387 [2024-10-08 18:36:23.646507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.387 qpair failed and we were unable to recover it. 00:28:30.387 [2024-10-08 18:36:23.646696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.387 [2024-10-08 18:36:23.646728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.387 qpair failed and we were unable to recover it. 00:28:30.387 [2024-10-08 18:36:23.646832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.387 [2024-10-08 18:36:23.646864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.387 qpair failed and we were unable to recover it. 00:28:30.387 [2024-10-08 18:36:23.646984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.387 [2024-10-08 18:36:23.647015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.387 qpair failed and we were unable to recover it. 00:28:30.387 [2024-10-08 18:36:23.647204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.387 [2024-10-08 18:36:23.647236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.387 qpair failed and we were unable to recover it. 00:28:30.387 [2024-10-08 18:36:23.647495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.387 [2024-10-08 18:36:23.647529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.387 qpair failed and we were unable to recover it. 00:28:30.387 [2024-10-08 18:36:23.647654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.387 [2024-10-08 18:36:23.647692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.387 qpair failed and we were unable to recover it. 00:28:30.387 [2024-10-08 18:36:23.647886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.387 [2024-10-08 18:36:23.647918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.387 qpair failed and we were unable to recover it. 
00:28:30.387 [2024-10-08 18:36:23.648101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.387 [2024-10-08 18:36:23.648133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.387 qpair failed and we were unable to recover it. 00:28:30.387 [2024-10-08 18:36:23.648308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.387 [2024-10-08 18:36:23.648340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.387 qpair failed and we were unable to recover it. 00:28:30.387 [2024-10-08 18:36:23.648530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.387 [2024-10-08 18:36:23.648563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.387 qpair failed and we were unable to recover it. 00:28:30.387 [2024-10-08 18:36:23.648803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.387 [2024-10-08 18:36:23.648835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.387 qpair failed and we were unable to recover it. 00:28:30.387 [2024-10-08 18:36:23.649098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.387 [2024-10-08 18:36:23.649130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.387 qpair failed and we were unable to recover it. 00:28:30.659 [2024-10-08 18:36:23.649307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-10-08 18:36:23.649340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-10-08 18:36:23.649538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-10-08 18:36:23.649572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-10-08 18:36:23.649792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-10-08 18:36:23.649823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-10-08 18:36:23.649927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-10-08 18:36:23.649959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-10-08 18:36:23.650184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-10-08 18:36:23.650217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 
00:28:30.659 [2024-10-08 18:36:23.650411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-10-08 18:36:23.650445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-10-08 18:36:23.650640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-10-08 18:36:23.650672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-10-08 18:36:23.650882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-10-08 18:36:23.650915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-10-08 18:36:23.651039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-10-08 18:36:23.651071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-10-08 18:36:23.651204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-10-08 18:36:23.651236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-10-08 18:36:23.651495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-10-08 18:36:23.651528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-10-08 18:36:23.651651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-10-08 18:36:23.651682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-10-08 18:36:23.651893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-10-08 18:36:23.651925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-10-08 18:36:23.652164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-10-08 18:36:23.652195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-10-08 18:36:23.652445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-10-08 18:36:23.652477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 
00:28:30.659 [2024-10-08 18:36:23.652676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.652708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.652889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.652921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.653041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.653073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.653244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.653276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.653479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.653512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.653653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.653694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.653820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.653848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.653969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.654000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.654186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.654212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.654388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.654416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 
00:28:30.660 [2024-10-08 18:36:23.654650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.654677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.654843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.654869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.654991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.655018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.655129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.655159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.655332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.655358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.655640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.655667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.655927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.655953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.656134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.656160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.656416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.656443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.656584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.656610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 
00:28:30.660 [2024-10-08 18:36:23.656731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.656759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.656995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.657021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.657128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.657158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.657393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.657421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.657540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.657566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.657848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.657874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.658050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.658076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.658262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.658288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.658449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.658488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.658596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.658632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 
00:28:30.660 [2024-10-08 18:36:23.658833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.658859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.659095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.659122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.659325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.659357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.659630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.659657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.659838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.659865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.660108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.660135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.660359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.660398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.660653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.660679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.660873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.660 [2024-10-08 18:36:23.660898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.660 qpair failed and we were unable to recover it. 00:28:30.660 [2024-10-08 18:36:23.661013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.661040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 
00:28:30.661 [2024-10-08 18:36:23.661225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.661253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.661372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.661408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.661589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.661616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.661816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.661843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.661963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.661989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.662171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.662198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.662464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.662492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.662677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.662704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.662891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.662918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.663124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.663151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 
00:28:30.661 [2024-10-08 18:36:23.663332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.663359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.663598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.663625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.663809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.663835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.664098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.664124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.664292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.664320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.664453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.664489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.664723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.664749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.664985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.665012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.665190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.665216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.665395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.665423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 
00:28:30.661 [2024-10-08 18:36:23.665671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.665699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.665921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.665947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.666180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.666207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.666399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.666428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.666692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.666720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.666980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.667006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.667170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.667196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.667391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.667421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.667546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.667572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 00:28:30.661 [2024-10-08 18:36:23.667751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.661 [2024-10-08 18:36:23.667777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.661 qpair failed and we were unable to recover it. 
00:28:30.661 [2024-10-08 18:36:23.667899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.661 [2024-10-08 18:36:23.667927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.661 qpair failed and we were unable to recover it.
00:28:30.661 [... the same three-line failure repeats for tqpair=0xa01c60, timestamps 18:36:23.668046 through 18:36:23.691566: every connect() attempt to 10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:28:30.664 [2024-10-08 18:36:23.691768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.664 [2024-10-08 18:36:23.691811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.664 qpair failed and we were unable to recover it.
00:28:30.664 [... the same failure repeats for tqpair=0x7f185c000b90 through 18:36:23.699262 ...]
00:28:30.665 [2024-10-08 18:36:23.699494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.665 [2024-10-08 18:36:23.699550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.665 qpair failed and we were unable to recover it.
00:28:30.665 [... the same failure repeats for tqpair=0x7f1858000b90 through 18:36:23.700708 ...]
00:28:30.665 [2024-10-08 18:36:23.700955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.665 [2024-10-08 18:36:23.701027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420
00:28:30.665 qpair failed and we were unable to recover it.
00:28:30.665 [2024-10-08 18:36:23.701243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.665 [2024-10-08 18:36:23.701283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420
00:28:30.665 qpair failed and we were unable to recover it.
00:28:30.666 [... the failure keeps repeating for tqpair=0xa01c60 through 18:36:23.713077; no connect() to 10.0.0.2, port=4420 succeeds anywhere in this span ...]
00:28:30.667 [2024-10-08 18:36:23.713190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-10-08 18:36:23.713221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-10-08 18:36:23.713481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-10-08 18:36:23.713507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-10-08 18:36:23.713640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-10-08 18:36:23.713667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-10-08 18:36:23.713900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-10-08 18:36:23.713926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-10-08 18:36:23.714191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-10-08 18:36:23.714216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-10-08 18:36:23.714418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-10-08 18:36:23.714445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-10-08 18:36:23.714636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-10-08 18:36:23.714662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-10-08 18:36:23.714782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-10-08 18:36:23.714808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-10-08 18:36:23.714997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-10-08 18:36:23.715024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-10-08 18:36:23.715150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-10-08 18:36:23.715177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 
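errno = 111 is ECONNREFUSED on Linux: the host reaches 10.0.0.2, but nothing is accepting TCP connections on port 4420 while the target is torn down, so every posix_sock_create attempt above fails the same way. A minimal standalone sketch of the same failure -- illustrative only, assuming a Linux host with no listener on that address/port; this is not SPDK's posix_sock_create:

/* repro_econnrefused.c -- connect() to a port with no listener.
 * Prints "connect() failed, errno = 111 (Connection refused)" on Linux
 * when nothing is bound to 10.0.0.2:4420 (values taken from the log). */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}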
00:28:30.667 [2024-10-08 18:36:23.715412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-10-08 18:36:23.715460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-10-08 18:36:23.715597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-10-08 18:36:23.715633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.668 [the same sequence repeats for tqpair=0x7f1858000b90 -- 38 more attempts, timestamps 18:36:23.715807 through 18:36:23.724434]
00:28:30.668 [the same sequence for tqpair=0x7f185c000b90 -- 21 attempts, timestamps 18:36:23.724651 through 18:36:23.730232]
00:28:30.669 [the same sequence for tqpair=0x7f1858000b90 -- 9 attempts, timestamps 18:36:23.730481 through 18:36:23.732607]
00:28:30.669 [the same sequence for tqpair=0x7f1858000b90 -- 8 attempts, timestamps 18:36:23.732866 through 18:36:23.734409]
00:28:30.669 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:30.669 [the same sequence -- 1 attempt at 18:36:23.734673]
00:28:30.669 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:28:30.669 [the same sequence -- 2 attempts, timestamps 18:36:23.734821 through 18:36:23.735072]
00:28:30.669 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:28:30.669 [the same sequence -- 2 attempts, timestamps 18:36:23.735257 through 18:36:23.735448]
00:28:30.669 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:30.669 [the same sequence -- 1 attempt at 18:36:23.735661]
00:28:30.669 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:30.669 [the same sequence -- 3 attempts, timestamps 18:36:23.735825 through 18:36:23.736248]
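The interleaved xtrace lines show the harness side of the same moment: nvmf_target_disconnect_tc2 falls out of what looks like a wait/retry helper in autotest_common.sh (the (( i == 0 )) test and return 0), then closes out target startup with timing_exit start_nvmf_tgt and set +x, while the initiator keeps re-dialing in the background. A hedged sketch of that kind of bounded reconnect loop -- illustrative only, not the SPDK or autotest implementation:

/* reconnect_loop.c -- keep retrying until the listener returns or the
 * attempt budget runs out; each failed pass emits one "connect() failed,
 * errno = 111" line, which is the shape of the log above. */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 1; attempt <= 50; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        fprintf(stderr, "connect() failed, errno = %d\n", errno);
        close(fd);
        usleep(100 * 1000);                       /* brief backoff, then retry */
    }
    fprintf(stderr, "target never came back within 50 attempts\n");
    return 1;
}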
00:28:30.669 [the same sequence for tqpair=0x7f1858000b90 -- 31 attempts, timestamps 18:36:23.736423 through 18:36:23.743412]
00:28:30.670 [the same sequence for tqpair=0x7f185c000b90 -- 6 attempts, timestamps 18:36:23.743630 through 18:36:23.744932]
00:28:30.671 [the same sequence for tqpair=0x7f1858000b90 -- 33 attempts, timestamps 18:36:23.745053 through 18:36:23.751018]
00:28:30.671 [2024-10-08 18:36:23.751145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.671 [2024-10-08 18:36:23.751177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.671 qpair failed and we were unable to recover it. 00:28:30.671 [2024-10-08 18:36:23.751387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.671 [2024-10-08 18:36:23.751440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.671 qpair failed and we were unable to recover it. 00:28:30.671 [2024-10-08 18:36:23.751592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.671 [2024-10-08 18:36:23.751627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.671 qpair failed and we were unable to recover it. 00:28:30.671 [2024-10-08 18:36:23.751803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.671 [2024-10-08 18:36:23.751848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.671 qpair failed and we were unable to recover it. 00:28:30.671 [2024-10-08 18:36:23.752045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.671 [2024-10-08 18:36:23.752078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.671 qpair failed and we were unable to recover it. 00:28:30.671 [2024-10-08 18:36:23.752214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.671 [2024-10-08 18:36:23.752255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.671 qpair failed and we were unable to recover it. 00:28:30.671 [2024-10-08 18:36:23.752470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.671 [2024-10-08 18:36:23.752509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.671 qpair failed and we were unable to recover it. 00:28:30.671 [2024-10-08 18:36:23.752706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.671 [2024-10-08 18:36:23.752740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.671 qpair failed and we were unable to recover it. 00:28:30.671 [2024-10-08 18:36:23.752878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.671 [2024-10-08 18:36:23.752913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.671 qpair failed and we were unable to recover it. 00:28:30.671 [2024-10-08 18:36:23.753088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.671 [2024-10-08 18:36:23.753127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.671 qpair failed and we were unable to recover it. 
00:28:30.671 [2024-10-08 18:36:23.753301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.671 [2024-10-08 18:36:23.753333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.671 qpair failed and we were unable to recover it. 00:28:30.671 [2024-10-08 18:36:23.753464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.671 [2024-10-08 18:36:23.753497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.671 qpair failed and we were unable to recover it. 00:28:30.671 [2024-10-08 18:36:23.753680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.671 [2024-10-08 18:36:23.753714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.671 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.753887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.753919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.754097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.754139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.754294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.754326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.754529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.754564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.754763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.754796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.754911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.754943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.755122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.755156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 
00:28:30.672 [2024-10-08 18:36:23.755348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.755391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.755501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.755534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.755712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.755745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.755865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.755897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.756025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.756058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.756229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.756260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.756443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.756478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.756650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.756682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.756862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.756895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.757070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.757103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 
00:28:30.672 [2024-10-08 18:36:23.757342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.757373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.757505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.757537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.757670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.757702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.757938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.757971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.758145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.758177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.758367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.758410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.758533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.758566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.758771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.758805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.758935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.758967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.759234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.759266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 
00:28:30.672 [2024-10-08 18:36:23.759459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.759493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1864000b90 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.759633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.759675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.759871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.759899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.760024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.760052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.760162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.760191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.760360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.760405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.760581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.760607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.760804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.760830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.672 qpair failed and we were unable to recover it. 00:28:30.672 [2024-10-08 18:36:23.760961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.672 [2024-10-08 18:36:23.760987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.761152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.761178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 
00:28:30.673 [2024-10-08 18:36:23.761353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.761390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.761499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.761531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.761650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.761680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.761795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.761824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.762004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.762030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.762222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.762248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.762416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.762443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.762603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.762631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.762754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.762781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.762900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.762926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 
00:28:30.673 [2024-10-08 18:36:23.763162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.763189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.763358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.763396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.763512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.763538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.763647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.763676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.763852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.763879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.764063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.764089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.764335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.764361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.764485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.764518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.764692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.764724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.764832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.764862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 
00:28:30.673 [2024-10-08 18:36:23.764985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.765012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.765275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.765300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.765483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.765510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.765703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.765729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.765855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.765881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.765998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.766025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.766151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.766177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.766359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.766398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.766510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.766539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.766783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.766809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 
00:28:30.673 [2024-10-08 18:36:23.766936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.766962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.767068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.767099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.767218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.767245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.767365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.767406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.767598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.767625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.767741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.767767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.767884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.767910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.768095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.673 [2024-10-08 18:36:23.768122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.673 qpair failed and we were unable to recover it. 00:28:30.673 [2024-10-08 18:36:23.768325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.768351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.768468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.768499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 
00:28:30.674 [2024-10-08 18:36:23.768622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.768649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.768764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.768794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.768915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.768942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.769065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.769091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.769198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.769227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.769324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.769355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.769484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.769511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.769612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.769646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.769778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.769805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.770061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.770087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 
00:28:30.674 [2024-10-08 18:36:23.770261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.770287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.770407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.770437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.770612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.770638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.770824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.770850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.770962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.770990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.771161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.771185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.771293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.771318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.771453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.771478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.771594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.771618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01c60 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 00:28:30.674 [2024-10-08 18:36:23.771823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.674 [2024-10-08 18:36:23.771860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.674 qpair failed and we were unable to recover it. 
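errno 111 on Linux is ECONNREFUSED: connect() reached the address but nothing was accepting on the port, which is exactly what posix_sock_create hits once the NVMe-oF target at 10.0.0.2:4420 has gone away. A minimal standalone C sketch (not SPDK code; the loopback address is an assumption, chosen so nothing is listening) that reproduces the message the log keeps printing:

```c
/* Minimal sketch, not SPDK code: reproduce "connect() failed, errno = 111". */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* default NVMe/TCP port, as in the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* assumes no listener on this port */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With nothing listening this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```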
00:28:30.674 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:30.674 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:30.674 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:30.674 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:30.674 [2024-10-08 18:36:23.771973 .. 18:36:23.773395] (connect() failed, errno = 111 / sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeated, originally interleaved with the trace lines above)
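The trace shows the harness registering a cleanup trap for SIGINT/SIGTERM/EXIT and issuing rpc_cmd bdev_malloc_create 64 512 -b Malloc0 (in SPDK's RPC this creates a 64 MiB malloc bdev with a 512-byte block size, named Malloc0), while the host keeps retrying the dead qpair in the background. A hedged C sketch of that retry-until-giving-up pattern (illustrative only, not SPDK's actual reconnect path; the address, attempt count, and backoff interval are assumptions):

```c
/* Sketch of the pattern the log reflects: retry connect() while the error
 * is ECONNREFUSED, then give up -- the point where the log prints
 * "qpair failed and we were unable to recover it." Not SPDK code. */
#include <stdio.h>
#include <stdint.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* One connection attempt; returns 0 on success, -errno on failure. */
static int try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -errno;

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0 ? 0 : -errno;
    close(fd);
    return rc;
}

int main(void)
{
    const int max_attempts = 5;   /* assumed bound, for illustration */

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        int rc = try_connect("127.0.0.1", 4420);
        if (rc == 0) {
            printf("attempt %d: connected\n", attempt);
            return 0;
        }
        printf("attempt %d: connect() failed, errno = %d (%s)\n",
               attempt, -rc, strerror(-rc));
        if (rc != -ECONNREFUSED)  /* only a refused connection is retried here */
            break;
        usleep(100 * 1000);       /* back off 100 ms between attempts */
    }

    /* Retries exhausted: the analogue of the log's
     * "qpair failed and we were unable to recover it." */
    fprintf(stderr, "unable to recover connection\n");
    return 1;
}
```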
00:28:30.674 [2024-10-08 18:36:23.773522 .. 18:36:23.782244] (the same error triplet -- connect() failed, errno = 111; sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeats for every remaining reconnect attempt in this window)
00:28:30.676 [2024-10-08 18:36:23.782351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.782406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.782592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.782624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.782792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.782824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.782996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.783028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.783209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.783241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.783417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.783450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.783575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.783607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.783800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.783832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.784014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.784047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.784170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.784203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 
00:28:30.676 [2024-10-08 18:36:23.784389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.784422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.784595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.784629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.784817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.784849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.784956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.784989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.785097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.785130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.785231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.785263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.785432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.785465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.785708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.785741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.785858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.785891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.786075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.786113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 
00:28:30.676 [2024-10-08 18:36:23.786299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.786331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.786474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.786508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.786683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.786716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.786892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.786925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.787044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.787076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.787254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.787287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.787423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.787459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.787653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.787688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.787794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.787827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.787938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.787971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 
00:28:30.676 [2024-10-08 18:36:23.788165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.788197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.788318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.788352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.788549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.788584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.788708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.788740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.788863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.788895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.789010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.676 [2024-10-08 18:36:23.789042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.676 qpair failed and we were unable to recover it. 00:28:30.676 [2024-10-08 18:36:23.789230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.677 [2024-10-08 18:36:23.789261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.677 qpair failed and we were unable to recover it. 00:28:30.677 [2024-10-08 18:36:23.789501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.677 [2024-10-08 18:36:23.789535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.677 qpair failed and we were unable to recover it. 00:28:30.677 [2024-10-08 18:36:23.789645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.677 [2024-10-08 18:36:23.789676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.677 qpair failed and we were unable to recover it. 00:28:30.677 [2024-10-08 18:36:23.789857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.677 [2024-10-08 18:36:23.789890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.677 qpair failed and we were unable to recover it. 
00:28:30.677 [2024-10-08 18:36:23.790013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.677 [2024-10-08 18:36:23.790045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.677 qpair failed and we were unable to recover it. 00:28:30.677 [2024-10-08 18:36:23.790145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.677 [2024-10-08 18:36:23.790177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.677 qpair failed and we were unable to recover it. 00:28:30.677 [2024-10-08 18:36:23.790442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.677 [2024-10-08 18:36:23.790476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.677 qpair failed and we were unable to recover it. 00:28:30.677 [2024-10-08 18:36:23.790658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.677 [2024-10-08 18:36:23.790690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.677 qpair failed and we were unable to recover it. 00:28:30.677 [2024-10-08 18:36:23.790941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.677 [2024-10-08 18:36:23.790994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.677 qpair failed and we were unable to recover it. 00:28:30.677 [2024-10-08 18:36:23.791200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.677 [2024-10-08 18:36:23.791238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.677 qpair failed and we were unable to recover it. 00:28:30.677 Malloc0 00:28:30.677 [2024-10-08 18:36:23.791391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.677 [2024-10-08 18:36:23.791426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.677 qpair failed and we were unable to recover it. 00:28:30.677 [2024-10-08 18:36:23.791606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.677 [2024-10-08 18:36:23.791637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.677 qpair failed and we were unable to recover it. 00:28:30.677 [2024-10-08 18:36:23.791898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.677 [2024-10-08 18:36:23.791930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420 00:28:30.677 qpair failed and we were unable to recover it. 
00:28:30.677 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:30.677 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:30.677 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:30.677 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:30.677 [2024-10-08 18:36:23.793338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.677 [2024-10-08 18:36:23.793402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.677 qpair failed and we were unable to recover it.
00:28:30.677 [2024-10-08 18:36:23.794165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.677 [2024-10-08 18:36:23.794213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1858000b90 with addr=10.0.0.2, port=4420
00:28:30.677 qpair failed and we were unable to recover it.
00:28:30.677 [2024-10-08 18:36:23.794435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.677 [2024-10-08 18:36:23.794488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.677 qpair failed and we were unable to recover it.
00:28:30.678 [2024-10-08 18:36:23.798790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:30.679 [2024-10-08 18:36:23.806059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.679 [2024-10-08 18:36:23.806091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.679 qpair failed and we were unable to recover it.
00:28:30.679 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:30.679 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:30.679 [2024-10-08 18:36:23.808076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.679 [2024-10-08 18:36:23.808107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.679 qpair failed and we were unable to recover it.
00:28:30.679 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:30.679 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:30.679 [2024-10-08 18:36:23.809873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.679 [2024-10-08 18:36:23.809904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420
00:28:30.679 qpair failed and we were unable to recover it.
00:28:30.679 [2024-10-08 18:36:23.810112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.679 [2024-10-08 18:36:23.810144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.679 qpair failed and we were unable to recover it. 00:28:30.679 [2024-10-08 18:36:23.810356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.679 [2024-10-08 18:36:23.810419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.679 qpair failed and we were unable to recover it. 00:28:30.679 [2024-10-08 18:36:23.810557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.679 [2024-10-08 18:36:23.810589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.679 qpair failed and we were unable to recover it. 00:28:30.679 [2024-10-08 18:36:23.810720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.679 [2024-10-08 18:36:23.810752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.679 qpair failed and we were unable to recover it. 00:28:30.679 [2024-10-08 18:36:23.810972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.679 [2024-10-08 18:36:23.811003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.679 qpair failed and we were unable to recover it. 00:28:30.679 [2024-10-08 18:36:23.811182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.679 [2024-10-08 18:36:23.811214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.679 qpair failed and we were unable to recover it. 00:28:30.679 [2024-10-08 18:36:23.811449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.679 [2024-10-08 18:36:23.811482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.679 qpair failed and we were unable to recover it. 00:28:30.679 [2024-10-08 18:36:23.811761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.679 [2024-10-08 18:36:23.811792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.679 qpair failed and we were unable to recover it. 00:28:30.679 [2024-10-08 18:36:23.811910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.679 [2024-10-08 18:36:23.811942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.679 qpair failed and we were unable to recover it. 00:28:30.679 [2024-10-08 18:36:23.812146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.679 [2024-10-08 18:36:23.812178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.679 qpair failed and we were unable to recover it. 
00:28:30.679 [2024-10-08 18:36:23.812418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.679 [2024-10-08 18:36:23.812451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.679 qpair failed and we were unable to recover it. 00:28:30.679 [2024-10-08 18:36:23.812637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.679 [2024-10-08 18:36:23.812668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.679 qpair failed and we were unable to recover it. 00:28:30.679 [2024-10-08 18:36:23.812840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.679 [2024-10-08 18:36:23.812872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.679 qpair failed and we were unable to recover it. 00:28:30.679 [2024-10-08 18:36:23.813138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.680 [2024-10-08 18:36:23.813169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.680 qpair failed and we were unable to recover it. 00:28:30.680 [2024-10-08 18:36:23.813426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.680 [2024-10-08 18:36:23.813459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.680 qpair failed and we were unable to recover it. 00:28:30.680 [2024-10-08 18:36:23.813595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.680 [2024-10-08 18:36:23.813627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.680 qpair failed and we were unable to recover it. 00:28:30.680 [2024-10-08 18:36:23.813858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.680 [2024-10-08 18:36:23.813890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.680 qpair failed and we were unable to recover it. 00:28:30.680 [2024-10-08 18:36:23.814076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.680 [2024-10-08 18:36:23.814108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.680 qpair failed and we were unable to recover it. 00:28:30.680 [2024-10-08 18:36:23.814293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.680 [2024-10-08 18:36:23.814324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.680 qpair failed and we were unable to recover it. 00:28:30.680 [2024-10-08 18:36:23.814504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.680 [2024-10-08 18:36:23.814537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f185c000b90 with addr=10.0.0.2, port=4420 00:28:30.680 qpair failed and we were unable to recover it. 
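errno = 111 in the entries above is Linux ECONNREFUSED: the host's posix_sock_create() is getting "Connection refused" back from connect() because nothing is listening on 10.0.0.2:4420 while the target is down. A minimal sketch (no SPDK required) that reproduces the same errno locally; the port is assumed unused:

    import errno
    import os
    import socket

    print(os.strerror(111))  # "Connection refused" -- errno 111 is ECONNREFUSED on Linux

    try:
        # Assumed-unused local port, standing in for the dead 10.0.0.2:4420 listener.
        socket.create_connection(("127.0.0.1", 4420), timeout=1)
    except OSError as e:
        print(e.errno == errno.ECONNREFUSED)  # True while no listener is present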
00:28:30.680 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:30.680 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:30.680 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:30.680 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... reconnect attempts keep failing with the same errno = 111 sequence from 18:36:23.814708 through 18:36:23.818395 ...]
[... the errno = 111 sequence continues from 18:36:23.818662 through 18:36:23.822909; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:30.681 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:30.681 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:30.681 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:30.681 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the errno = 111 sequence continues from 18:36:23.823150 through 18:36:23.826980 while the namespace and listener are re-added ...]
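The rpc_cmd calls above (host/target_disconnect.sh@24 and @25) are the harness's thin wrapper around SPDK's JSON-RPC client. A minimal sketch of the equivalent calls, with the rpc.py path assumed rather than taken from this log:

    import subprocess

    RPC = "./spdk/scripts/rpc.py"  # assumed path to the SPDK checkout's RPC client

    def rpc_cmd(*args):
        # Sends one JSON-RPC request to the running nvmf target over its Unix socket.
        return subprocess.run([RPC, *args], check=True,
                              capture_output=True, text=True).stdout

    rpc_cmd("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "Malloc0")
    rpc_cmd("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
            "-t", "tcp", "-a", "10.0.0.2", "-s", "4420")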
00:28:30.681 [2024-10-08 18:36:23.827036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:30.681 [2024-10-08 18:36:23.829458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.681 [2024-10-08 18:36:23.829572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.681 [2024-10-08 18:36:23.829619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.681 [2024-10-08 18:36:23.829643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.681 [2024-10-08 18:36:23.829664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:30.681 [2024-10-08 18:36:23.829724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:30.681 qpair failed and we were unable to recover it.
00:28:30.681 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:30.681 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:30.681 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:30.681 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:30.681 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
[... a second CONNECT attempt at 18:36:23.839388 fails with the identical sequence (Unknown controller ID 0x1; rc -5; sct 1, sc 130; CQ transport error -6) ...]
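Once the listener is back (the *NOTICE* line above), the failure mode changes: TCP now connects, but the Fabrics CONNECT for the I/O qpair is rejected. The target no longer recognizes controller ID 0x1 from the pre-disconnect session, and the host sees sct 1, sc 130. A sketch of decoding that status; the status-code names below are my reading of the NVMe-oF spec's CONNECT command-specific codes, not something this log states:

    # sct 1 = command-specific status type; for the Fabrics CONNECT command the
    # command-specific status codes are (assumed, per the NVMe-oF spec):
    CONNECT_STATUS = {
        0x80: "Incompatible Format",
        0x81: "Controller Busy",
        0x82: "Connect Invalid Parameters",  # e.g. a stale controller ID
        0x83: "Connect Restart Discovery",
        0x84: "Connect Invalid Host",
    }

    sct, sc = 1, 130
    if sct == 1:  # command-specific status
        print(hex(sc), CONNECT_STATUS.get(sc, "unknown"))  # 0x82 Connect Invalid Parameters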
00:28:30.681 18:36:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 583708
[... three more CONNECT attempts (18:36:23.849363, .859368, .869337) fail with the same Unknown controller ID 0x1 / rc -5 / sct 1, sc 130 / CQ transport error -6 sequence ...]
00:28:30.682 [2024-10-08 18:36:23.879382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.682 [2024-10-08 18:36:23.879470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.682 [2024-10-08 18:36:23.879497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.682 [2024-10-08 18:36:23.879505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.682 [2024-10-08 18:36:23.879511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:30.682 [2024-10-08 18:36:23.879532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:30.682 qpair failed and we were unable to recover it.
[... the same CONNECT failure sequence (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; CQ transport error -6 on qpair id 2) repeats roughly every 10 ms from 18:36:23.889391 through 18:36:24.200347, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:28:30.944 [2024-10-08 18:36:24.210308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.944 [2024-10-08 18:36:24.210363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.944 [2024-10-08 18:36:24.210385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.945 [2024-10-08 18:36:24.210393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.945 [2024-10-08 18:36:24.210400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:30.945 [2024-10-08 18:36:24.210416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:30.945 qpair failed and we were unable to recover it. 00:28:30.945 [2024-10-08 18:36:24.220315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.945 [2024-10-08 18:36:24.220423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.945 [2024-10-08 18:36:24.220439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.945 [2024-10-08 18:36:24.220447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.945 [2024-10-08 18:36:24.220453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:30.945 [2024-10-08 18:36:24.220468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:30.945 qpair failed and we were unable to recover it. 00:28:30.945 [2024-10-08 18:36:24.230332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.945 [2024-10-08 18:36:24.230389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.945 [2024-10-08 18:36:24.230403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.945 [2024-10-08 18:36:24.230410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.945 [2024-10-08 18:36:24.230416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:30.945 [2024-10-08 18:36:24.230431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:30.945 qpair failed and we were unable to recover it. 
00:28:30.945 [2024-10-08 18:36:24.240391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.945 [2024-10-08 18:36:24.240453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.945 [2024-10-08 18:36:24.240467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.945 [2024-10-08 18:36:24.240475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.945 [2024-10-08 18:36:24.240481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:30.945 [2024-10-08 18:36:24.240496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:30.945 qpair failed and we were unable to recover it. 00:28:30.945 [2024-10-08 18:36:24.250325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.945 [2024-10-08 18:36:24.250383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.945 [2024-10-08 18:36:24.250397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.945 [2024-10-08 18:36:24.250403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.945 [2024-10-08 18:36:24.250409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:30.945 [2024-10-08 18:36:24.250424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:30.945 qpair failed and we were unable to recover it. 00:28:30.945 [2024-10-08 18:36:24.260430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.945 [2024-10-08 18:36:24.260500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.945 [2024-10-08 18:36:24.260514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.945 [2024-10-08 18:36:24.260521] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.945 [2024-10-08 18:36:24.260527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:30.945 [2024-10-08 18:36:24.260542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:30.945 qpair failed and we were unable to recover it. 
00:28:31.205 [2024-10-08 18:36:24.270471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.205 [2024-10-08 18:36:24.270524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.205 [2024-10-08 18:36:24.270539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.205 [2024-10-08 18:36:24.270549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.205 [2024-10-08 18:36:24.270554] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.205 [2024-10-08 18:36:24.270569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.205 qpair failed and we were unable to recover it. 00:28:31.205 [2024-10-08 18:36:24.280504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.205 [2024-10-08 18:36:24.280574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.205 [2024-10-08 18:36:24.280588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.205 [2024-10-08 18:36:24.280594] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.205 [2024-10-08 18:36:24.280601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.205 [2024-10-08 18:36:24.280615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.205 qpair failed and we were unable to recover it. 00:28:31.205 [2024-10-08 18:36:24.290496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.205 [2024-10-08 18:36:24.290550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.205 [2024-10-08 18:36:24.290565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.205 [2024-10-08 18:36:24.290572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.205 [2024-10-08 18:36:24.290578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.205 [2024-10-08 18:36:24.290593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.205 qpair failed and we were unable to recover it. 
00:28:31.206 [2024-10-08 18:36:24.300551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.206 [2024-10-08 18:36:24.300606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.206 [2024-10-08 18:36:24.300620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.206 [2024-10-08 18:36:24.300627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.206 [2024-10-08 18:36:24.300633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.206 [2024-10-08 18:36:24.300647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.206 qpair failed and we were unable to recover it. 00:28:31.206 [2024-10-08 18:36:24.310585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.206 [2024-10-08 18:36:24.310655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.206 [2024-10-08 18:36:24.310668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.206 [2024-10-08 18:36:24.310675] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.206 [2024-10-08 18:36:24.310681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.206 [2024-10-08 18:36:24.310695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.206 qpair failed and we were unable to recover it. 00:28:31.206 [2024-10-08 18:36:24.320590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.206 [2024-10-08 18:36:24.320642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.206 [2024-10-08 18:36:24.320656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.206 [2024-10-08 18:36:24.320662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.206 [2024-10-08 18:36:24.320668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.206 [2024-10-08 18:36:24.320682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.206 qpair failed and we were unable to recover it. 
00:28:31.206 [2024-10-08 18:36:24.330601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.206 [2024-10-08 18:36:24.330653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.206 [2024-10-08 18:36:24.330666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.206 [2024-10-08 18:36:24.330673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.206 [2024-10-08 18:36:24.330679] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.206 [2024-10-08 18:36:24.330693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.206 qpair failed and we were unable to recover it. 00:28:31.206 [2024-10-08 18:36:24.340699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.206 [2024-10-08 18:36:24.340752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.206 [2024-10-08 18:36:24.340766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.206 [2024-10-08 18:36:24.340774] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.206 [2024-10-08 18:36:24.340779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.206 [2024-10-08 18:36:24.340794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.206 qpair failed and we were unable to recover it. 00:28:31.206 [2024-10-08 18:36:24.350686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.206 [2024-10-08 18:36:24.350744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.206 [2024-10-08 18:36:24.350758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.206 [2024-10-08 18:36:24.350766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.206 [2024-10-08 18:36:24.350772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.206 [2024-10-08 18:36:24.350788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.206 qpair failed and we were unable to recover it. 
00:28:31.206 [2024-10-08 18:36:24.360740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.206 [2024-10-08 18:36:24.360807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.206 [2024-10-08 18:36:24.360827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.206 [2024-10-08 18:36:24.360834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.206 [2024-10-08 18:36:24.360841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.206 [2024-10-08 18:36:24.360856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.206 qpair failed and we were unable to recover it. 00:28:31.206 [2024-10-08 18:36:24.370645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.206 [2024-10-08 18:36:24.370704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.206 [2024-10-08 18:36:24.370717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.206 [2024-10-08 18:36:24.370724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.206 [2024-10-08 18:36:24.370730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.206 [2024-10-08 18:36:24.370745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.206 qpair failed and we were unable to recover it. 00:28:31.206 [2024-10-08 18:36:24.380755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.206 [2024-10-08 18:36:24.380810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.206 [2024-10-08 18:36:24.380823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.206 [2024-10-08 18:36:24.380830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.206 [2024-10-08 18:36:24.380836] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.206 [2024-10-08 18:36:24.380851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.206 qpair failed and we were unable to recover it. 
00:28:31.206 [2024-10-08 18:36:24.390776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.206 [2024-10-08 18:36:24.390829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.206 [2024-10-08 18:36:24.390843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.206 [2024-10-08 18:36:24.390850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.206 [2024-10-08 18:36:24.390856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.206 [2024-10-08 18:36:24.390871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.206 qpair failed and we were unable to recover it. 00:28:31.206 [2024-10-08 18:36:24.400762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.206 [2024-10-08 18:36:24.400845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.206 [2024-10-08 18:36:24.400858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.206 [2024-10-08 18:36:24.400864] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.206 [2024-10-08 18:36:24.400870] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.206 [2024-10-08 18:36:24.400888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.206 qpair failed and we were unable to recover it. 00:28:31.206 [2024-10-08 18:36:24.410866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.206 [2024-10-08 18:36:24.410934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.206 [2024-10-08 18:36:24.410947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.206 [2024-10-08 18:36:24.410954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.206 [2024-10-08 18:36:24.410960] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.206 [2024-10-08 18:36:24.410974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.206 qpair failed and we were unable to recover it. 
00:28:31.206 [2024-10-08 18:36:24.420847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.206 [2024-10-08 18:36:24.420910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.206 [2024-10-08 18:36:24.420924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.206 [2024-10-08 18:36:24.420930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.206 [2024-10-08 18:36:24.420936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.206 [2024-10-08 18:36:24.420950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.206 qpair failed and we were unable to recover it. 00:28:31.206 [2024-10-08 18:36:24.430941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.206 [2024-10-08 18:36:24.431008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.206 [2024-10-08 18:36:24.431021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.206 [2024-10-08 18:36:24.431028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.206 [2024-10-08 18:36:24.431034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.207 [2024-10-08 18:36:24.431048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.207 qpair failed and we were unable to recover it. 00:28:31.207 [2024-10-08 18:36:24.440971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.207 [2024-10-08 18:36:24.441064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.207 [2024-10-08 18:36:24.441079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.207 [2024-10-08 18:36:24.441086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.207 [2024-10-08 18:36:24.441093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.207 [2024-10-08 18:36:24.441108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.207 qpair failed and we were unable to recover it. 
00:28:31.207 [2024-10-08 18:36:24.450930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.207 [2024-10-08 18:36:24.450990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.207 [2024-10-08 18:36:24.451010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.207 [2024-10-08 18:36:24.451021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.207 [2024-10-08 18:36:24.451029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.207 [2024-10-08 18:36:24.451053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.207 qpair failed and we were unable to recover it. 00:28:31.207 [2024-10-08 18:36:24.460965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.207 [2024-10-08 18:36:24.461034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.207 [2024-10-08 18:36:24.461049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.207 [2024-10-08 18:36:24.461057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.207 [2024-10-08 18:36:24.461063] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.207 [2024-10-08 18:36:24.461079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.207 qpair failed and we were unable to recover it. 00:28:31.207 [2024-10-08 18:36:24.471017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.207 [2024-10-08 18:36:24.471118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.207 [2024-10-08 18:36:24.471133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.207 [2024-10-08 18:36:24.471141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.207 [2024-10-08 18:36:24.471147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.207 [2024-10-08 18:36:24.471162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.207 qpair failed and we were unable to recover it. 
00:28:31.207 [2024-10-08 18:36:24.480969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.207 [2024-10-08 18:36:24.481023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.207 [2024-10-08 18:36:24.481038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.207 [2024-10-08 18:36:24.481045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.207 [2024-10-08 18:36:24.481051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.207 [2024-10-08 18:36:24.481066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.207 qpair failed and we were unable to recover it. 00:28:31.207 [2024-10-08 18:36:24.491085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.207 [2024-10-08 18:36:24.491135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.207 [2024-10-08 18:36:24.491148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.207 [2024-10-08 18:36:24.491155] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.207 [2024-10-08 18:36:24.491164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.207 [2024-10-08 18:36:24.491179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.207 qpair failed and we were unable to recover it. 00:28:31.207 [2024-10-08 18:36:24.501132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.207 [2024-10-08 18:36:24.501211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.207 [2024-10-08 18:36:24.501229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.207 [2024-10-08 18:36:24.501236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.207 [2024-10-08 18:36:24.501242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.207 [2024-10-08 18:36:24.501257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.207 qpair failed and we were unable to recover it. 
00:28:31.207 [2024-10-08 18:36:24.511124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.207 [2024-10-08 18:36:24.511180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.207 [2024-10-08 18:36:24.511194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.207 [2024-10-08 18:36:24.511201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.207 [2024-10-08 18:36:24.511207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.207 [2024-10-08 18:36:24.511222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.207 qpair failed and we were unable to recover it. 00:28:31.207 [2024-10-08 18:36:24.521182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.207 [2024-10-08 18:36:24.521234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.207 [2024-10-08 18:36:24.521247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.207 [2024-10-08 18:36:24.521253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.207 [2024-10-08 18:36:24.521260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.207 [2024-10-08 18:36:24.521275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.207 qpair failed and we were unable to recover it. 00:28:31.467 [2024-10-08 18:36:24.531154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.467 [2024-10-08 18:36:24.531218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.467 [2024-10-08 18:36:24.531231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.467 [2024-10-08 18:36:24.531238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.467 [2024-10-08 18:36:24.531244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.467 [2024-10-08 18:36:24.531259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.467 qpair failed and we were unable to recover it. 
00:28:31.467 [2024-10-08 18:36:24.541146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.467 [2024-10-08 18:36:24.541209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.467 [2024-10-08 18:36:24.541223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.467 [2024-10-08 18:36:24.541230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.467 [2024-10-08 18:36:24.541235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.467 [2024-10-08 18:36:24.541250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.467 qpair failed and we were unable to recover it. 00:28:31.467 [2024-10-08 18:36:24.551235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.467 [2024-10-08 18:36:24.551286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.467 [2024-10-08 18:36:24.551299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.467 [2024-10-08 18:36:24.551306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.467 [2024-10-08 18:36:24.551312] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.467 [2024-10-08 18:36:24.551327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.467 qpair failed and we were unable to recover it. 00:28:31.467 [2024-10-08 18:36:24.561261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.467 [2024-10-08 18:36:24.561317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.467 [2024-10-08 18:36:24.561331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.467 [2024-10-08 18:36:24.561338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.467 [2024-10-08 18:36:24.561344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.467 [2024-10-08 18:36:24.561358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.467 qpair failed and we were unable to recover it. 
00:28:31.467 [2024-10-08 18:36:24.571316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.467 [2024-10-08 18:36:24.571383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.467 [2024-10-08 18:36:24.571397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.467 [2024-10-08 18:36:24.571404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.467 [2024-10-08 18:36:24.571409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.467 [2024-10-08 18:36:24.571424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.467 qpair failed and we were unable to recover it. 00:28:31.467 [2024-10-08 18:36:24.581326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.467 [2024-10-08 18:36:24.581385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.467 [2024-10-08 18:36:24.581399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.467 [2024-10-08 18:36:24.581405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.467 [2024-10-08 18:36:24.581414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.467 [2024-10-08 18:36:24.581429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.467 qpair failed and we were unable to recover it. 00:28:31.468 [2024-10-08 18:36:24.591357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.468 [2024-10-08 18:36:24.591420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.468 [2024-10-08 18:36:24.591434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.468 [2024-10-08 18:36:24.591441] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.468 [2024-10-08 18:36:24.591447] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.468 [2024-10-08 18:36:24.591461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.468 qpair failed and we were unable to recover it. 
00:28:31.468 [2024-10-08 18:36:24.601388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.468 [2024-10-08 18:36:24.601440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.468 [2024-10-08 18:36:24.601454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.468 [2024-10-08 18:36:24.601460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.468 [2024-10-08 18:36:24.601466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.468 [2024-10-08 18:36:24.601481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.468 qpair failed and we were unable to recover it. 00:28:31.468 [2024-10-08 18:36:24.611417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.468 [2024-10-08 18:36:24.611471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.468 [2024-10-08 18:36:24.611485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.468 [2024-10-08 18:36:24.611491] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.468 [2024-10-08 18:36:24.611497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.468 [2024-10-08 18:36:24.611511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.468 qpair failed and we were unable to recover it. 00:28:31.468 [2024-10-08 18:36:24.621400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.468 [2024-10-08 18:36:24.621457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.468 [2024-10-08 18:36:24.621471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.468 [2024-10-08 18:36:24.621478] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.468 [2024-10-08 18:36:24.621484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.468 [2024-10-08 18:36:24.621499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.468 qpair failed and we were unable to recover it. 
00:28:31.468 [2024-10-08 18:36:24.631579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.468 [2024-10-08 18:36:24.631651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.468 [2024-10-08 18:36:24.631664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.468 [2024-10-08 18:36:24.631671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.468 [2024-10-08 18:36:24.631677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.468 [2024-10-08 18:36:24.631691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.468 qpair failed and we were unable to recover it. 00:28:31.468 [2024-10-08 18:36:24.641562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.468 [2024-10-08 18:36:24.641618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.468 [2024-10-08 18:36:24.641632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.468 [2024-10-08 18:36:24.641639] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.468 [2024-10-08 18:36:24.641645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.468 [2024-10-08 18:36:24.641659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.468 qpair failed and we were unable to recover it. 00:28:31.468 [2024-10-08 18:36:24.651612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.468 [2024-10-08 18:36:24.651663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.468 [2024-10-08 18:36:24.651676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.468 [2024-10-08 18:36:24.651683] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.468 [2024-10-08 18:36:24.651689] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.468 [2024-10-08 18:36:24.651703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.468 qpair failed and we were unable to recover it. 
00:28:31.468 [2024-10-08 18:36:24.661552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.468 [2024-10-08 18:36:24.661614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.468 [2024-10-08 18:36:24.661628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.468 [2024-10-08 18:36:24.661635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.468 [2024-10-08 18:36:24.661640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.468 [2024-10-08 18:36:24.661655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.468 qpair failed and we were unable to recover it. 00:28:31.468 [2024-10-08 18:36:24.671609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.468 [2024-10-08 18:36:24.671664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.468 [2024-10-08 18:36:24.671677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.468 [2024-10-08 18:36:24.671688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.468 [2024-10-08 18:36:24.671693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.468 [2024-10-08 18:36:24.671708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.468 qpair failed and we were unable to recover it. 00:28:31.468 [2024-10-08 18:36:24.681601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.468 [2024-10-08 18:36:24.681657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.468 [2024-10-08 18:36:24.681671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.468 [2024-10-08 18:36:24.681678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.468 [2024-10-08 18:36:24.681684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:31.468 [2024-10-08 18:36:24.681698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.468 qpair failed and we were unable to recover it. 
[... the same six-message CONNECT failure sequence repeats 66 more times at roughly 10 ms intervals, from 2024-10-08 18:36:24.691 through 18:36:25.343 (elapsed 00:28:31.468 to 00:28:32.252); only the timestamps vary, while tqpair 0x7f185c000b90, qpair id 2, traddr 10.0.0.2, trsvcid 4420, and subnqn nqn.2016-06.io.spdk:cnode1 remain constant, and every repetition ends with "qpair failed and we were unable to recover it." ...]
00:28:32.252 [2024-10-08 18:36:25.353557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.252 [2024-10-08 18:36:25.353611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.252 [2024-10-08 18:36:25.353624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.252 [2024-10-08 18:36:25.353631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.252 [2024-10-08 18:36:25.353637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.252 [2024-10-08 18:36:25.353651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.252 qpair failed and we were unable to recover it. 00:28:32.252 [2024-10-08 18:36:25.363555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.252 [2024-10-08 18:36:25.363614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.252 [2024-10-08 18:36:25.363627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.252 [2024-10-08 18:36:25.363634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.252 [2024-10-08 18:36:25.363640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.252 [2024-10-08 18:36:25.363654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.252 qpair failed and we were unable to recover it. 00:28:32.252 [2024-10-08 18:36:25.373585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.252 [2024-10-08 18:36:25.373651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.252 [2024-10-08 18:36:25.373664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.252 [2024-10-08 18:36:25.373670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.252 [2024-10-08 18:36:25.373677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.252 [2024-10-08 18:36:25.373691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.252 qpair failed and we were unable to recover it. 
00:28:32.252 [2024-10-08 18:36:25.383603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.252 [2024-10-08 18:36:25.383697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.252 [2024-10-08 18:36:25.383712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.253 [2024-10-08 18:36:25.383719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.253 [2024-10-08 18:36:25.383726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.253 [2024-10-08 18:36:25.383744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.253 qpair failed and we were unable to recover it. 00:28:32.253 [2024-10-08 18:36:25.393722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.253 [2024-10-08 18:36:25.393806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.253 [2024-10-08 18:36:25.393820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.253 [2024-10-08 18:36:25.393827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.253 [2024-10-08 18:36:25.393833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.253 [2024-10-08 18:36:25.393848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.253 qpair failed and we were unable to recover it. 00:28:32.253 [2024-10-08 18:36:25.403658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.253 [2024-10-08 18:36:25.403711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.253 [2024-10-08 18:36:25.403724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.253 [2024-10-08 18:36:25.403731] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.253 [2024-10-08 18:36:25.403737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.253 [2024-10-08 18:36:25.403751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.253 qpair failed and we were unable to recover it. 
00:28:32.253 [2024-10-08 18:36:25.413674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.253 [2024-10-08 18:36:25.413775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.253 [2024-10-08 18:36:25.413789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.253 [2024-10-08 18:36:25.413796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.253 [2024-10-08 18:36:25.413802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.253 [2024-10-08 18:36:25.413817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.253 qpair failed and we were unable to recover it. 00:28:32.253 [2024-10-08 18:36:25.423696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.253 [2024-10-08 18:36:25.423754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.253 [2024-10-08 18:36:25.423767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.253 [2024-10-08 18:36:25.423774] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.253 [2024-10-08 18:36:25.423780] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.253 [2024-10-08 18:36:25.423793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.253 qpair failed and we were unable to recover it. 00:28:32.253 [2024-10-08 18:36:25.433683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.253 [2024-10-08 18:36:25.433742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.253 [2024-10-08 18:36:25.433762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.253 [2024-10-08 18:36:25.433768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.253 [2024-10-08 18:36:25.433775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.253 [2024-10-08 18:36:25.433788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.253 qpair failed and we were unable to recover it. 
00:28:32.253 [2024-10-08 18:36:25.443767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.253 [2024-10-08 18:36:25.443822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.253 [2024-10-08 18:36:25.443837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.253 [2024-10-08 18:36:25.443844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.253 [2024-10-08 18:36:25.443851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.253 [2024-10-08 18:36:25.443865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.253 qpair failed and we were unable to recover it. 00:28:32.253 [2024-10-08 18:36:25.453721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.253 [2024-10-08 18:36:25.453821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.253 [2024-10-08 18:36:25.453836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.253 [2024-10-08 18:36:25.453843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.253 [2024-10-08 18:36:25.453849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.253 [2024-10-08 18:36:25.453864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.253 qpair failed and we were unable to recover it. 00:28:32.253 [2024-10-08 18:36:25.463845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.253 [2024-10-08 18:36:25.463905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.253 [2024-10-08 18:36:25.463921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.253 [2024-10-08 18:36:25.463928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.253 [2024-10-08 18:36:25.463934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.253 [2024-10-08 18:36:25.463949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.253 qpair failed and we were unable to recover it. 
00:28:32.253 [2024-10-08 18:36:25.473811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.253 [2024-10-08 18:36:25.473905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.253 [2024-10-08 18:36:25.473920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.253 [2024-10-08 18:36:25.473928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.253 [2024-10-08 18:36:25.473934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.253 [2024-10-08 18:36:25.473953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.253 qpair failed and we were unable to recover it. 00:28:32.253 [2024-10-08 18:36:25.483857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.253 [2024-10-08 18:36:25.483952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.253 [2024-10-08 18:36:25.483966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.253 [2024-10-08 18:36:25.483973] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.253 [2024-10-08 18:36:25.483980] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.253 [2024-10-08 18:36:25.483995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.253 qpair failed and we were unable to recover it. 00:28:32.253 [2024-10-08 18:36:25.493856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.253 [2024-10-08 18:36:25.493909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.253 [2024-10-08 18:36:25.493922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.253 [2024-10-08 18:36:25.493929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.253 [2024-10-08 18:36:25.493935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.253 [2024-10-08 18:36:25.493949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.253 qpair failed and we were unable to recover it. 
00:28:32.253 [2024-10-08 18:36:25.503891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.253 [2024-10-08 18:36:25.503946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.253 [2024-10-08 18:36:25.503959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.253 [2024-10-08 18:36:25.503966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.253 [2024-10-08 18:36:25.503972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.253 [2024-10-08 18:36:25.503986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.253 qpair failed and we were unable to recover it. 00:28:32.253 [2024-10-08 18:36:25.513917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.253 [2024-10-08 18:36:25.513998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.253 [2024-10-08 18:36:25.514012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.253 [2024-10-08 18:36:25.514019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.253 [2024-10-08 18:36:25.514025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.253 [2024-10-08 18:36:25.514040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.253 qpair failed and we were unable to recover it. 00:28:32.253 [2024-10-08 18:36:25.523970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.253 [2024-10-08 18:36:25.524044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.254 [2024-10-08 18:36:25.524061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.254 [2024-10-08 18:36:25.524068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.254 [2024-10-08 18:36:25.524074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.254 [2024-10-08 18:36:25.524089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.254 qpair failed and we were unable to recover it. 
00:28:32.254 [2024-10-08 18:36:25.533949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.254 [2024-10-08 18:36:25.534004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.254 [2024-10-08 18:36:25.534017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.254 [2024-10-08 18:36:25.534024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.254 [2024-10-08 18:36:25.534030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.254 [2024-10-08 18:36:25.534045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.254 qpair failed and we were unable to recover it. 00:28:32.254 [2024-10-08 18:36:25.544047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.254 [2024-10-08 18:36:25.544104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.254 [2024-10-08 18:36:25.544117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.254 [2024-10-08 18:36:25.544124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.254 [2024-10-08 18:36:25.544130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.254 [2024-10-08 18:36:25.544145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.254 qpair failed and we were unable to recover it. 00:28:32.254 [2024-10-08 18:36:25.554072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.254 [2024-10-08 18:36:25.554131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.254 [2024-10-08 18:36:25.554144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.254 [2024-10-08 18:36:25.554151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.254 [2024-10-08 18:36:25.554157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.254 [2024-10-08 18:36:25.554172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.254 qpair failed and we were unable to recover it. 
00:28:32.254 [2024-10-08 18:36:25.564069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.254 [2024-10-08 18:36:25.564149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.254 [2024-10-08 18:36:25.564163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.254 [2024-10-08 18:36:25.564170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.254 [2024-10-08 18:36:25.564179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.254 [2024-10-08 18:36:25.564194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.254 qpair failed and we were unable to recover it. 00:28:32.514 [2024-10-08 18:36:25.574098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.514 [2024-10-08 18:36:25.574152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.514 [2024-10-08 18:36:25.574165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.514 [2024-10-08 18:36:25.574172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.514 [2024-10-08 18:36:25.574178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.514 [2024-10-08 18:36:25.574193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.514 qpair failed and we were unable to recover it. 00:28:32.514 [2024-10-08 18:36:25.584164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.514 [2024-10-08 18:36:25.584221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.514 [2024-10-08 18:36:25.584236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.514 [2024-10-08 18:36:25.584243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.514 [2024-10-08 18:36:25.584249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.514 [2024-10-08 18:36:25.584264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.514 qpair failed and we were unable to recover it. 
00:28:32.514 [2024-10-08 18:36:25.594214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.514 [2024-10-08 18:36:25.594272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.514 [2024-10-08 18:36:25.594285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.514 [2024-10-08 18:36:25.594291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.514 [2024-10-08 18:36:25.594297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.514 [2024-10-08 18:36:25.594312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.514 qpair failed and we were unable to recover it. 00:28:32.514 [2024-10-08 18:36:25.604246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.514 [2024-10-08 18:36:25.604337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.514 [2024-10-08 18:36:25.604352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.514 [2024-10-08 18:36:25.604359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.514 [2024-10-08 18:36:25.604365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.514 [2024-10-08 18:36:25.604384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.514 qpair failed and we were unable to recover it. 00:28:32.514 [2024-10-08 18:36:25.614206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.514 [2024-10-08 18:36:25.614262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.514 [2024-10-08 18:36:25.614275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.514 [2024-10-08 18:36:25.614282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.514 [2024-10-08 18:36:25.614288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.514 [2024-10-08 18:36:25.614302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.514 qpair failed and we were unable to recover it. 
00:28:32.514 [2024-10-08 18:36:25.624283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.514 [2024-10-08 18:36:25.624369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.514 [2024-10-08 18:36:25.624387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.514 [2024-10-08 18:36:25.624394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.514 [2024-10-08 18:36:25.624400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.514 [2024-10-08 18:36:25.624414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.514 qpair failed and we were unable to recover it. 00:28:32.514 [2024-10-08 18:36:25.634269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.514 [2024-10-08 18:36:25.634322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.514 [2024-10-08 18:36:25.634335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.514 [2024-10-08 18:36:25.634342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.514 [2024-10-08 18:36:25.634348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.514 [2024-10-08 18:36:25.634362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.514 qpair failed and we were unable to recover it. 00:28:32.514 [2024-10-08 18:36:25.644341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.514 [2024-10-08 18:36:25.644396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.514 [2024-10-08 18:36:25.644410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.514 [2024-10-08 18:36:25.644417] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.514 [2024-10-08 18:36:25.644423] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.514 [2024-10-08 18:36:25.644438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.514 qpair failed and we were unable to recover it. 
00:28:32.514 [2024-10-08 18:36:25.654331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.514 [2024-10-08 18:36:25.654421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.514 [2024-10-08 18:36:25.654434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.514 [2024-10-08 18:36:25.654441] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.514 [2024-10-08 18:36:25.654450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.514 [2024-10-08 18:36:25.654465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.514 qpair failed and we were unable to recover it. 00:28:32.514 [2024-10-08 18:36:25.664430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.514 [2024-10-08 18:36:25.664484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.514 [2024-10-08 18:36:25.664498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.514 [2024-10-08 18:36:25.664505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.514 [2024-10-08 18:36:25.664510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.514 [2024-10-08 18:36:25.664524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.514 qpair failed and we were unable to recover it. 00:28:32.514 [2024-10-08 18:36:25.674405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.515 [2024-10-08 18:36:25.674460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.515 [2024-10-08 18:36:25.674473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.515 [2024-10-08 18:36:25.674480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.515 [2024-10-08 18:36:25.674486] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.515 [2024-10-08 18:36:25.674500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.515 qpair failed and we were unable to recover it. 
00:28:32.515 [2024-10-08 18:36:25.684449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.515 [2024-10-08 18:36:25.684506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.515 [2024-10-08 18:36:25.684519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.515 [2024-10-08 18:36:25.684526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.515 [2024-10-08 18:36:25.684532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.515 [2024-10-08 18:36:25.684545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.515 qpair failed and we were unable to recover it. 00:28:32.515 [2024-10-08 18:36:25.694451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.515 [2024-10-08 18:36:25.694506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.515 [2024-10-08 18:36:25.694520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.515 [2024-10-08 18:36:25.694526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.515 [2024-10-08 18:36:25.694532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.515 [2024-10-08 18:36:25.694547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.515 qpair failed and we were unable to recover it. 00:28:32.515 [2024-10-08 18:36:25.704462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.515 [2024-10-08 18:36:25.704519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.515 [2024-10-08 18:36:25.704536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.515 [2024-10-08 18:36:25.704546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.515 [2024-10-08 18:36:25.704555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.515 [2024-10-08 18:36:25.704575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.515 qpair failed and we were unable to recover it. 
00:28:32.515 [2024-10-08 18:36:25.714552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.515 [2024-10-08 18:36:25.714616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.515 [2024-10-08 18:36:25.714632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.515 [2024-10-08 18:36:25.714638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.515 [2024-10-08 18:36:25.714645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.515 [2024-10-08 18:36:25.714660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.515 qpair failed and we were unable to recover it. 00:28:32.515 [2024-10-08 18:36:25.724574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.515 [2024-10-08 18:36:25.724631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.515 [2024-10-08 18:36:25.724646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.515 [2024-10-08 18:36:25.724653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.515 [2024-10-08 18:36:25.724659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.515 [2024-10-08 18:36:25.724674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.515 qpair failed and we were unable to recover it. 00:28:32.515 [2024-10-08 18:36:25.734556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.515 [2024-10-08 18:36:25.734611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.515 [2024-10-08 18:36:25.734624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.515 [2024-10-08 18:36:25.734631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.515 [2024-10-08 18:36:25.734637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.515 [2024-10-08 18:36:25.734651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.515 qpair failed and we were unable to recover it. 
00:28:32.515 [2024-10-08 18:36:25.744604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.515 [2024-10-08 18:36:25.744677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.515 [2024-10-08 18:36:25.744691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.515 [2024-10-08 18:36:25.744702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.515 [2024-10-08 18:36:25.744707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.515 [2024-10-08 18:36:25.744722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.515 qpair failed and we were unable to recover it. 00:28:32.515 [2024-10-08 18:36:25.754646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.515 [2024-10-08 18:36:25.754702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.515 [2024-10-08 18:36:25.754716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.515 [2024-10-08 18:36:25.754723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.515 [2024-10-08 18:36:25.754729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.515 [2024-10-08 18:36:25.754743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.515 qpair failed and we were unable to recover it. 00:28:32.515 [2024-10-08 18:36:25.764687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.515 [2024-10-08 18:36:25.764742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.515 [2024-10-08 18:36:25.764755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.515 [2024-10-08 18:36:25.764762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.515 [2024-10-08 18:36:25.764768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.515 [2024-10-08 18:36:25.764781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.515 qpair failed and we were unable to recover it. 
00:28:32.515 [2024-10-08 18:36:25.774717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.515 [2024-10-08 18:36:25.774773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.515 [2024-10-08 18:36:25.774786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.515 [2024-10-08 18:36:25.774793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.515 [2024-10-08 18:36:25.774799] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.515 [2024-10-08 18:36:25.774814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.515 qpair failed and we were unable to recover it. 00:28:32.515 [2024-10-08 18:36:25.784752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.515 [2024-10-08 18:36:25.784820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.515 [2024-10-08 18:36:25.784833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.515 [2024-10-08 18:36:25.784840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.515 [2024-10-08 18:36:25.784847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.515 [2024-10-08 18:36:25.784861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.515 qpair failed and we were unable to recover it. 00:28:32.515 [2024-10-08 18:36:25.794784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.515 [2024-10-08 18:36:25.794843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.515 [2024-10-08 18:36:25.794856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.515 [2024-10-08 18:36:25.794863] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.515 [2024-10-08 18:36:25.794869] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.515 [2024-10-08 18:36:25.794884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.515 qpair failed and we were unable to recover it. 
00:28:32.515 [2024-10-08 18:36:25.804812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.515 [2024-10-08 18:36:25.804863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.515 [2024-10-08 18:36:25.804877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.515 [2024-10-08 18:36:25.804884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.515 [2024-10-08 18:36:25.804890] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.516 [2024-10-08 18:36:25.804904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.516 qpair failed and we were unable to recover it. 00:28:32.516 [2024-10-08 18:36:25.814767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.516 [2024-10-08 18:36:25.814821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.516 [2024-10-08 18:36:25.814834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.516 [2024-10-08 18:36:25.814840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.516 [2024-10-08 18:36:25.814846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.516 [2024-10-08 18:36:25.814860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.516 qpair failed and we were unable to recover it. 00:28:32.516 [2024-10-08 18:36:25.824907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.516 [2024-10-08 18:36:25.824983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.516 [2024-10-08 18:36:25.824996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.516 [2024-10-08 18:36:25.825004] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.516 [2024-10-08 18:36:25.825010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.516 [2024-10-08 18:36:25.825024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.516 qpair failed and we were unable to recover it. 
00:28:32.776 [2024-10-08 18:36:25.834898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.776 [2024-10-08 18:36:25.834952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.776 [2024-10-08 18:36:25.834965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.776 [2024-10-08 18:36:25.834977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.776 [2024-10-08 18:36:25.834982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.776 [2024-10-08 18:36:25.834997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.776 qpair failed and we were unable to recover it. 00:28:32.776 [2024-10-08 18:36:25.844925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.776 [2024-10-08 18:36:25.844977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.776 [2024-10-08 18:36:25.844991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.776 [2024-10-08 18:36:25.844998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.776 [2024-10-08 18:36:25.845004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.776 [2024-10-08 18:36:25.845018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.776 qpair failed and we were unable to recover it. 00:28:32.776 [2024-10-08 18:36:25.854948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.776 [2024-10-08 18:36:25.854998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.776 [2024-10-08 18:36:25.855011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.776 [2024-10-08 18:36:25.855018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.776 [2024-10-08 18:36:25.855023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:32.776 [2024-10-08 18:36:25.855037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.776 qpair failed and we were unable to recover it. 
00:28:32.776 [2024-10-08 18:36:25.865010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.776 [2024-10-08 18:36:25.865067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.776 [2024-10-08 18:36:25.865080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.776 [2024-10-08 18:36:25.865087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.776 [2024-10-08 18:36:25.865092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.776 [2024-10-08 18:36:25.865107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.776 qpair failed and we were unable to recover it.
00:28:32.776 [2024-10-08 18:36:25.875015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.776 [2024-10-08 18:36:25.875067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.776 [2024-10-08 18:36:25.875080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.776 [2024-10-08 18:36:25.875087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.776 [2024-10-08 18:36:25.875093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.776 [2024-10-08 18:36:25.875107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.776 qpair failed and we were unable to recover it.
00:28:32.776 [2024-10-08 18:36:25.885036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.776 [2024-10-08 18:36:25.885090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.776 [2024-10-08 18:36:25.885103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.776 [2024-10-08 18:36:25.885110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.776 [2024-10-08 18:36:25.885116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.776 [2024-10-08 18:36:25.885131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.776 qpair failed and we were unable to recover it.
00:28:32.776 [2024-10-08 18:36:25.895091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.776 [2024-10-08 18:36:25.895149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.776 [2024-10-08 18:36:25.895162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.776 [2024-10-08 18:36:25.895169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.776 [2024-10-08 18:36:25.895175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.776 [2024-10-08 18:36:25.895189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.777 qpair failed and we were unable to recover it.
00:28:32.777 [2024-10-08 18:36:25.905095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.777 [2024-10-08 18:36:25.905148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.777 [2024-10-08 18:36:25.905161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.777 [2024-10-08 18:36:25.905168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.777 [2024-10-08 18:36:25.905174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.777 [2024-10-08 18:36:25.905188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.777 qpair failed and we were unable to recover it.
00:28:32.777 [2024-10-08 18:36:25.915074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.777 [2024-10-08 18:36:25.915128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.777 [2024-10-08 18:36:25.915141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.777 [2024-10-08 18:36:25.915148] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.777 [2024-10-08 18:36:25.915154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.777 [2024-10-08 18:36:25.915168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.777 qpair failed and we were unable to recover it.
00:28:32.777 [2024-10-08 18:36:25.925160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.777 [2024-10-08 18:36:25.925213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.777 [2024-10-08 18:36:25.925237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.777 [2024-10-08 18:36:25.925244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.777 [2024-10-08 18:36:25.925251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.777 [2024-10-08 18:36:25.925270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.777 qpair failed and we were unable to recover it.
00:28:32.777 [2024-10-08 18:36:25.935187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.777 [2024-10-08 18:36:25.935237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.777 [2024-10-08 18:36:25.935251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.777 [2024-10-08 18:36:25.935258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.777 [2024-10-08 18:36:25.935264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.777 [2024-10-08 18:36:25.935278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.777 qpair failed and we were unable to recover it.
00:28:32.777 [2024-10-08 18:36:25.945207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.777 [2024-10-08 18:36:25.945266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.777 [2024-10-08 18:36:25.945280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.777 [2024-10-08 18:36:25.945287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.777 [2024-10-08 18:36:25.945293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.777 [2024-10-08 18:36:25.945307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.777 qpair failed and we were unable to recover it.
00:28:32.777 [2024-10-08 18:36:25.955255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.777 [2024-10-08 18:36:25.955317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.777 [2024-10-08 18:36:25.955334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.777 [2024-10-08 18:36:25.955345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.777 [2024-10-08 18:36:25.955352] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.777 [2024-10-08 18:36:25.955368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.777 qpair failed and we were unable to recover it.
00:28:32.777 [2024-10-08 18:36:25.965301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.777 [2024-10-08 18:36:25.965358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.777 [2024-10-08 18:36:25.965373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.777 [2024-10-08 18:36:25.965386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.777 [2024-10-08 18:36:25.965393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.777 [2024-10-08 18:36:25.965412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.777 qpair failed and we were unable to recover it.
00:28:32.777 [2024-10-08 18:36:25.975323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.777 [2024-10-08 18:36:25.975382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.777 [2024-10-08 18:36:25.975399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.777 [2024-10-08 18:36:25.975407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.777 [2024-10-08 18:36:25.975413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.777 [2024-10-08 18:36:25.975428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.777 qpair failed and we were unable to recover it.
00:28:32.777 [2024-10-08 18:36:25.985338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.777 [2024-10-08 18:36:25.985397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.777 [2024-10-08 18:36:25.985411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.777 [2024-10-08 18:36:25.985418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.777 [2024-10-08 18:36:25.985424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.777 [2024-10-08 18:36:25.985439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.777 qpair failed and we were unable to recover it.
00:28:32.777 [2024-10-08 18:36:25.995370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.777 [2024-10-08 18:36:25.995429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.777 [2024-10-08 18:36:25.995443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.777 [2024-10-08 18:36:25.995450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.777 [2024-10-08 18:36:25.995455] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.777 [2024-10-08 18:36:25.995470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.777 qpair failed and we were unable to recover it.
00:28:32.777 [2024-10-08 18:36:26.005400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.777 [2024-10-08 18:36:26.005453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.777 [2024-10-08 18:36:26.005467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.777 [2024-10-08 18:36:26.005474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.777 [2024-10-08 18:36:26.005480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.777 [2024-10-08 18:36:26.005494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.777 qpair failed and we were unable to recover it.
00:28:32.777 [2024-10-08 18:36:26.015420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.777 [2024-10-08 18:36:26.015497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.777 [2024-10-08 18:36:26.015514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.777 [2024-10-08 18:36:26.015521] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.777 [2024-10-08 18:36:26.015527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.777 [2024-10-08 18:36:26.015541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.777 qpair failed and we were unable to recover it.
00:28:32.777 [2024-10-08 18:36:26.025467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.777 [2024-10-08 18:36:26.025541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.777 [2024-10-08 18:36:26.025555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.777 [2024-10-08 18:36:26.025562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.777 [2024-10-08 18:36:26.025568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.777 [2024-10-08 18:36:26.025582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.778 qpair failed and we were unable to recover it.
00:28:32.778 [2024-10-08 18:36:26.035498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.778 [2024-10-08 18:36:26.035552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.778 [2024-10-08 18:36:26.035565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.778 [2024-10-08 18:36:26.035572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.778 [2024-10-08 18:36:26.035577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.778 [2024-10-08 18:36:26.035591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.778 qpair failed and we were unable to recover it.
00:28:32.778 [2024-10-08 18:36:26.045537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.778 [2024-10-08 18:36:26.045624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.778 [2024-10-08 18:36:26.045638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.778 [2024-10-08 18:36:26.045645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.778 [2024-10-08 18:36:26.045651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.778 [2024-10-08 18:36:26.045666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.778 qpair failed and we were unable to recover it.
00:28:32.778 [2024-10-08 18:36:26.055504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.778 [2024-10-08 18:36:26.055554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.778 [2024-10-08 18:36:26.055567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.778 [2024-10-08 18:36:26.055574] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.778 [2024-10-08 18:36:26.055582] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.778 [2024-10-08 18:36:26.055596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.778 qpair failed and we were unable to recover it.
00:28:32.778 [2024-10-08 18:36:26.065568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.778 [2024-10-08 18:36:26.065626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.778 [2024-10-08 18:36:26.065639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.778 [2024-10-08 18:36:26.065646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.778 [2024-10-08 18:36:26.065652] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.778 [2024-10-08 18:36:26.065666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.778 qpair failed and we were unable to recover it.
00:28:32.778 [2024-10-08 18:36:26.075618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.778 [2024-10-08 18:36:26.075669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.778 [2024-10-08 18:36:26.075682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.778 [2024-10-08 18:36:26.075689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.778 [2024-10-08 18:36:26.075695] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.778 [2024-10-08 18:36:26.075710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.778 qpair failed and we were unable to recover it.
00:28:32.778 [2024-10-08 18:36:26.085627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.778 [2024-10-08 18:36:26.085680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.778 [2024-10-08 18:36:26.085693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.778 [2024-10-08 18:36:26.085700] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.778 [2024-10-08 18:36:26.085706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.778 [2024-10-08 18:36:26.085720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.778 qpair failed and we were unable to recover it.
00:28:32.778 [2024-10-08 18:36:26.095700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.778 [2024-10-08 18:36:26.095781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.778 [2024-10-08 18:36:26.095795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.778 [2024-10-08 18:36:26.095802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.778 [2024-10-08 18:36:26.095808] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:32.778 [2024-10-08 18:36:26.095823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.778 qpair failed and we were unable to recover it.
00:28:33.038 [2024-10-08 18:36:26.105699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.038 [2024-10-08 18:36:26.105802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.038 [2024-10-08 18:36:26.105817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.038 [2024-10-08 18:36:26.105824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.038 [2024-10-08 18:36:26.105830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.038 [2024-10-08 18:36:26.105845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.038 qpair failed and we were unable to recover it.
00:28:33.038 [2024-10-08 18:36:26.115714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.038 [2024-10-08 18:36:26.115765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.038 [2024-10-08 18:36:26.115778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.038 [2024-10-08 18:36:26.115785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.038 [2024-10-08 18:36:26.115791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.038 [2024-10-08 18:36:26.115805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.038 qpair failed and we were unable to recover it.
00:28:33.038 [2024-10-08 18:36:26.125750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.038 [2024-10-08 18:36:26.125803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.038 [2024-10-08 18:36:26.125817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.038 [2024-10-08 18:36:26.125824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.038 [2024-10-08 18:36:26.125830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.038 [2024-10-08 18:36:26.125844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.038 qpair failed and we were unable to recover it.
00:28:33.038 [2024-10-08 18:36:26.135780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.038 [2024-10-08 18:36:26.135838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.038 [2024-10-08 18:36:26.135851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.038 [2024-10-08 18:36:26.135858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.038 [2024-10-08 18:36:26.135864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.038 [2024-10-08 18:36:26.135879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.039 qpair failed and we were unable to recover it.
00:28:33.039 [2024-10-08 18:36:26.145805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.039 [2024-10-08 18:36:26.145865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.039 [2024-10-08 18:36:26.145879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.039 [2024-10-08 18:36:26.145887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.039 [2024-10-08 18:36:26.145896] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.039 [2024-10-08 18:36:26.145911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.039 qpair failed and we were unable to recover it.
00:28:33.039 [2024-10-08 18:36:26.155843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.039 [2024-10-08 18:36:26.155936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.039 [2024-10-08 18:36:26.155950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.039 [2024-10-08 18:36:26.155957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.039 [2024-10-08 18:36:26.155963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.039 [2024-10-08 18:36:26.155978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.039 qpair failed and we were unable to recover it.
00:28:33.039 [2024-10-08 18:36:26.165852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.039 [2024-10-08 18:36:26.165905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.039 [2024-10-08 18:36:26.165918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.039 [2024-10-08 18:36:26.165925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.039 [2024-10-08 18:36:26.165931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.039 [2024-10-08 18:36:26.165947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.039 qpair failed and we were unable to recover it.
00:28:33.039 [2024-10-08 18:36:26.175874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.039 [2024-10-08 18:36:26.175927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.039 [2024-10-08 18:36:26.175940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.039 [2024-10-08 18:36:26.175947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.039 [2024-10-08 18:36:26.175953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.039 [2024-10-08 18:36:26.175967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.039 qpair failed and we were unable to recover it.
00:28:33.039 [2024-10-08 18:36:26.185916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.039 [2024-10-08 18:36:26.185971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.039 [2024-10-08 18:36:26.185984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.039 [2024-10-08 18:36:26.185991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.039 [2024-10-08 18:36:26.185997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.039 [2024-10-08 18:36:26.186011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.039 qpair failed and we were unable to recover it.
00:28:33.039 [2024-10-08 18:36:26.195977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.039 [2024-10-08 18:36:26.196064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.039 [2024-10-08 18:36:26.196078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.039 [2024-10-08 18:36:26.196085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.039 [2024-10-08 18:36:26.196091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.039 [2024-10-08 18:36:26.196106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.039 qpair failed and we were unable to recover it.
00:28:33.039 [2024-10-08 18:36:26.205965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.039 [2024-10-08 18:36:26.206019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.039 [2024-10-08 18:36:26.206036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.039 [2024-10-08 18:36:26.206046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.039 [2024-10-08 18:36:26.206054] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.039 [2024-10-08 18:36:26.206071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.039 qpair failed and we were unable to recover it.
00:28:33.039 [2024-10-08 18:36:26.216064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.039 [2024-10-08 18:36:26.216128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.039 [2024-10-08 18:36:26.216143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.039 [2024-10-08 18:36:26.216150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.039 [2024-10-08 18:36:26.216156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.039 [2024-10-08 18:36:26.216172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.039 qpair failed and we were unable to recover it.
00:28:33.039 [2024-10-08 18:36:26.225994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.039 [2024-10-08 18:36:26.226051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.039 [2024-10-08 18:36:26.226065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.039 [2024-10-08 18:36:26.226072] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.039 [2024-10-08 18:36:26.226078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.039 [2024-10-08 18:36:26.226093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.039 qpair failed and we were unable to recover it.
00:28:33.039 [2024-10-08 18:36:26.236037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.039 [2024-10-08 18:36:26.236089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.039 [2024-10-08 18:36:26.236103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.039 [2024-10-08 18:36:26.236118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.039 [2024-10-08 18:36:26.236123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.039 [2024-10-08 18:36:26.236138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.039 qpair failed and we were unable to recover it.
00:28:33.039 [2024-10-08 18:36:26.246078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.039 [2024-10-08 18:36:26.246130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.039 [2024-10-08 18:36:26.246143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.039 [2024-10-08 18:36:26.246150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.039 [2024-10-08 18:36:26.246157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.039 [2024-10-08 18:36:26.246173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.039 qpair failed and we were unable to recover it.
00:28:33.039 [2024-10-08 18:36:26.256101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.039 [2024-10-08 18:36:26.256155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.039 [2024-10-08 18:36:26.256169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.039 [2024-10-08 18:36:26.256176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.039 [2024-10-08 18:36:26.256182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.039 [2024-10-08 18:36:26.256196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.039 qpair failed and we were unable to recover it.
00:28:33.039 [2024-10-08 18:36:26.266133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.039 [2024-10-08 18:36:26.266188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.039 [2024-10-08 18:36:26.266202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.039 [2024-10-08 18:36:26.266209] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.039 [2024-10-08 18:36:26.266215] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.039 [2024-10-08 18:36:26.266229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.039 qpair failed and we were unable to recover it.
00:28:33.039 [2024-10-08 18:36:26.276167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.039 [2024-10-08 18:36:26.276221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.039 [2024-10-08 18:36:26.276234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.039 [2024-10-08 18:36:26.276241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.039 [2024-10-08 18:36:26.276247] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.039 [2024-10-08 18:36:26.276262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.039 qpair failed and we were unable to recover it.
00:28:33.039 [2024-10-08 18:36:26.286227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.039 [2024-10-08 18:36:26.286290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.040 [2024-10-08 18:36:26.286304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.040 [2024-10-08 18:36:26.286311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.040 [2024-10-08 18:36:26.286318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.040 [2024-10-08 18:36:26.286332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.040 qpair failed and we were unable to recover it.
00:28:33.040 [2024-10-08 18:36:26.296190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.040 [2024-10-08 18:36:26.296276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.040 [2024-10-08 18:36:26.296290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.040 [2024-10-08 18:36:26.296297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.040 [2024-10-08 18:36:26.296303] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.040 [2024-10-08 18:36:26.296318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.040 qpair failed and we were unable to recover it.
00:28:33.040 [2024-10-08 18:36:26.306257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.040 [2024-10-08 18:36:26.306312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.040 [2024-10-08 18:36:26.306325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.040 [2024-10-08 18:36:26.306332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.040 [2024-10-08 18:36:26.306338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.040 [2024-10-08 18:36:26.306352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.040 qpair failed and we were unable to recover it.
00:28:33.040 [2024-10-08 18:36:26.316287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.040 [2024-10-08 18:36:26.316340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.040 [2024-10-08 18:36:26.316354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.040 [2024-10-08 18:36:26.316361] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.040 [2024-10-08 18:36:26.316367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.040 [2024-10-08 18:36:26.316385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.040 qpair failed and we were unable to recover it.
00:28:33.040 [2024-10-08 18:36:26.326298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.040 [2024-10-08 18:36:26.326379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.040 [2024-10-08 18:36:26.326393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.040 [2024-10-08 18:36:26.326403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.040 [2024-10-08 18:36:26.326409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.040 [2024-10-08 18:36:26.326424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.040 qpair failed and we were unable to recover it.
00:28:33.040 [2024-10-08 18:36:26.336324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.040 [2024-10-08 18:36:26.336374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.040 [2024-10-08 18:36:26.336393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.040 [2024-10-08 18:36:26.336400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.040 [2024-10-08 18:36:26.336406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.040 [2024-10-08 18:36:26.336421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.040 qpair failed and we were unable to recover it.
00:28:33.040 [2024-10-08 18:36:26.346446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.040 [2024-10-08 18:36:26.346526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.040 [2024-10-08 18:36:26.346541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.040 [2024-10-08 18:36:26.346548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.040 [2024-10-08 18:36:26.346555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.040 [2024-10-08 18:36:26.346569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.040 qpair failed and we were unable to recover it.
00:28:33.040 [2024-10-08 18:36:26.356396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.040 [2024-10-08 18:36:26.356450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.040 [2024-10-08 18:36:26.356463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.040 [2024-10-08 18:36:26.356470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.040 [2024-10-08 18:36:26.356476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.040 [2024-10-08 18:36:26.356491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.040 qpair failed and we were unable to recover it.
00:28:33.300 [2024-10-08 18:36:26.366423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.300 [2024-10-08 18:36:26.366478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.300 [2024-10-08 18:36:26.366492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.300 [2024-10-08 18:36:26.366499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.300 [2024-10-08 18:36:26.366505] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.300 [2024-10-08 18:36:26.366520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.300 qpair failed and we were unable to recover it.
00:28:33.300 [2024-10-08 18:36:26.376441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.300 [2024-10-08 18:36:26.376520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.300 [2024-10-08 18:36:26.376534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.300 [2024-10-08 18:36:26.376541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.300 [2024-10-08 18:36:26.376547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.300 [2024-10-08 18:36:26.376561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.300 qpair failed and we were unable to recover it.
00:28:33.300 [2024-10-08 18:36:26.386490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.300 [2024-10-08 18:36:26.386546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.300 [2024-10-08 18:36:26.386559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.300 [2024-10-08 18:36:26.386566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.300 [2024-10-08 18:36:26.386571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.300 [2024-10-08 18:36:26.386586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.300 qpair failed and we were unable to recover it.
00:28:33.300 [2024-10-08 18:36:26.396536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.300 [2024-10-08 18:36:26.396618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.300 [2024-10-08 18:36:26.396633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.300 [2024-10-08 18:36:26.396640] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.300 [2024-10-08 18:36:26.396646] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.300 [2024-10-08 18:36:26.396661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.300 qpair failed and we were unable to recover it.
00:28:33.300 [2024-10-08 18:36:26.406568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.300 [2024-10-08 18:36:26.406620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.300 [2024-10-08 18:36:26.406632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.300 [2024-10-08 18:36:26.406639] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.300 [2024-10-08 18:36:26.406645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.300 [2024-10-08 18:36:26.406660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.300 qpair failed and we were unable to recover it.
00:28:33.300 [2024-10-08 18:36:26.416569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.300 [2024-10-08 18:36:26.416622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.301 [2024-10-08 18:36:26.416638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.301 [2024-10-08 18:36:26.416645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.301 [2024-10-08 18:36:26.416651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.301 [2024-10-08 18:36:26.416665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.301 qpair failed and we were unable to recover it.
00:28:33.301 [2024-10-08 18:36:26.426604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.301 [2024-10-08 18:36:26.426660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.301 [2024-10-08 18:36:26.426673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.301 [2024-10-08 18:36:26.426679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.301 [2024-10-08 18:36:26.426685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:33.301 [2024-10-08 18:36:26.426700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.301 qpair failed and we were unable to recover it.
00:28:33.301 [2024-10-08 18:36:26.436658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.301 [2024-10-08 18:36:26.436722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.301 [2024-10-08 18:36:26.436737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.301 [2024-10-08 18:36:26.436745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.301 [2024-10-08 18:36:26.436752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.301 [2024-10-08 18:36:26.436768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.301 qpair failed and we were unable to recover it. 00:28:33.301 [2024-10-08 18:36:26.446701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.301 [2024-10-08 18:36:26.446761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.301 [2024-10-08 18:36:26.446775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.301 [2024-10-08 18:36:26.446782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.301 [2024-10-08 18:36:26.446788] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.301 [2024-10-08 18:36:26.446803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.301 qpair failed and we were unable to recover it. 00:28:33.301 [2024-10-08 18:36:26.456718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.301 [2024-10-08 18:36:26.456781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.301 [2024-10-08 18:36:26.456797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.301 [2024-10-08 18:36:26.456807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.301 [2024-10-08 18:36:26.456814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.301 [2024-10-08 18:36:26.456833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.301 qpair failed and we were unable to recover it. 
00:28:33.301 [2024-10-08 18:36:26.466735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.301 [2024-10-08 18:36:26.466835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.301 [2024-10-08 18:36:26.466851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.301 [2024-10-08 18:36:26.466859] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.301 [2024-10-08 18:36:26.466865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.301 [2024-10-08 18:36:26.466881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.301 qpair failed and we were unable to recover it. 00:28:33.301 [2024-10-08 18:36:26.476732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.301 [2024-10-08 18:36:26.476791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.301 [2024-10-08 18:36:26.476805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.301 [2024-10-08 18:36:26.476812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.301 [2024-10-08 18:36:26.476818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.301 [2024-10-08 18:36:26.476833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.301 qpair failed and we were unable to recover it. 00:28:33.301 [2024-10-08 18:36:26.486753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.301 [2024-10-08 18:36:26.486807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.301 [2024-10-08 18:36:26.486821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.301 [2024-10-08 18:36:26.486828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.301 [2024-10-08 18:36:26.486834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.301 [2024-10-08 18:36:26.486849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.301 qpair failed and we were unable to recover it. 
00:28:33.301 [2024-10-08 18:36:26.496781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.301 [2024-10-08 18:36:26.496833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.301 [2024-10-08 18:36:26.496846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.301 [2024-10-08 18:36:26.496852] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.301 [2024-10-08 18:36:26.496858] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.301 [2024-10-08 18:36:26.496872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.301 qpair failed and we were unable to recover it. 00:28:33.301 [2024-10-08 18:36:26.506798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.301 [2024-10-08 18:36:26.506856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.301 [2024-10-08 18:36:26.506871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.301 [2024-10-08 18:36:26.506878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.301 [2024-10-08 18:36:26.506884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.301 [2024-10-08 18:36:26.506898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.301 qpair failed and we were unable to recover it. 00:28:33.301 [2024-10-08 18:36:26.516841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.301 [2024-10-08 18:36:26.516895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.301 [2024-10-08 18:36:26.516908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.301 [2024-10-08 18:36:26.516914] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.301 [2024-10-08 18:36:26.516920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.301 [2024-10-08 18:36:26.516934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.301 qpair failed and we were unable to recover it. 
00:28:33.301 [2024-10-08 18:36:26.526871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.301 [2024-10-08 18:36:26.526950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.301 [2024-10-08 18:36:26.526963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.301 [2024-10-08 18:36:26.526970] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.301 [2024-10-08 18:36:26.526976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.301 [2024-10-08 18:36:26.526990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.301 qpair failed and we were unable to recover it. 00:28:33.301 [2024-10-08 18:36:26.536892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.301 [2024-10-08 18:36:26.536947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.301 [2024-10-08 18:36:26.536960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.301 [2024-10-08 18:36:26.536967] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.301 [2024-10-08 18:36:26.536973] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.301 [2024-10-08 18:36:26.536988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.301 qpair failed and we were unable to recover it. 00:28:33.301 [2024-10-08 18:36:26.546942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.301 [2024-10-08 18:36:26.547004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.301 [2024-10-08 18:36:26.547019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.301 [2024-10-08 18:36:26.547026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.301 [2024-10-08 18:36:26.547032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.301 [2024-10-08 18:36:26.547050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.301 qpair failed and we were unable to recover it. 
00:28:33.301 [2024-10-08 18:36:26.556956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.301 [2024-10-08 18:36:26.557011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.302 [2024-10-08 18:36:26.557024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.302 [2024-10-08 18:36:26.557031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.302 [2024-10-08 18:36:26.557036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.302 [2024-10-08 18:36:26.557051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-10-08 18:36:26.566984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.302 [2024-10-08 18:36:26.567050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.302 [2024-10-08 18:36:26.567064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.302 [2024-10-08 18:36:26.567070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.302 [2024-10-08 18:36:26.567077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.302 [2024-10-08 18:36:26.567091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-10-08 18:36:26.576998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.302 [2024-10-08 18:36:26.577097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.302 [2024-10-08 18:36:26.577111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.302 [2024-10-08 18:36:26.577118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.302 [2024-10-08 18:36:26.577124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.302 [2024-10-08 18:36:26.577138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.302 qpair failed and we were unable to recover it. 
00:28:33.302 [2024-10-08 18:36:26.587035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.302 [2024-10-08 18:36:26.587094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.302 [2024-10-08 18:36:26.587108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.302 [2024-10-08 18:36:26.587115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.302 [2024-10-08 18:36:26.587120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.302 [2024-10-08 18:36:26.587135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-10-08 18:36:26.597070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.302 [2024-10-08 18:36:26.597126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.302 [2024-10-08 18:36:26.597142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.302 [2024-10-08 18:36:26.597149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.302 [2024-10-08 18:36:26.597155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.302 [2024-10-08 18:36:26.597169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-10-08 18:36:26.607111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.302 [2024-10-08 18:36:26.607164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.302 [2024-10-08 18:36:26.607178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.302 [2024-10-08 18:36:26.607184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.302 [2024-10-08 18:36:26.607190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.302 [2024-10-08 18:36:26.607204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.302 qpair failed and we were unable to recover it. 
00:28:33.302 [2024-10-08 18:36:26.617119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.302 [2024-10-08 18:36:26.617165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.302 [2024-10-08 18:36:26.617178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.302 [2024-10-08 18:36:26.617185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.302 [2024-10-08 18:36:26.617191] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.302 [2024-10-08 18:36:26.617205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.562 [2024-10-08 18:36:26.627189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.562 [2024-10-08 18:36:26.627270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.562 [2024-10-08 18:36:26.627286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.562 [2024-10-08 18:36:26.627293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.562 [2024-10-08 18:36:26.627299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.562 [2024-10-08 18:36:26.627313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.562 qpair failed and we were unable to recover it. 00:28:33.562 [2024-10-08 18:36:26.637235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.562 [2024-10-08 18:36:26.637297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.562 [2024-10-08 18:36:26.637310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.562 [2024-10-08 18:36:26.637317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.562 [2024-10-08 18:36:26.637326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.562 [2024-10-08 18:36:26.637340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.562 qpair failed and we were unable to recover it. 
00:28:33.562 [2024-10-08 18:36:26.647245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.562 [2024-10-08 18:36:26.647299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.562 [2024-10-08 18:36:26.647314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.562 [2024-10-08 18:36:26.647320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.562 [2024-10-08 18:36:26.647326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.562 [2024-10-08 18:36:26.647341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.562 qpair failed and we were unable to recover it. 00:28:33.562 [2024-10-08 18:36:26.657310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.562 [2024-10-08 18:36:26.657362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.562 [2024-10-08 18:36:26.657379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.562 [2024-10-08 18:36:26.657387] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.562 [2024-10-08 18:36:26.657393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.562 [2024-10-08 18:36:26.657407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.562 qpair failed and we were unable to recover it. 00:28:33.562 [2024-10-08 18:36:26.667321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.562 [2024-10-08 18:36:26.667403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.562 [2024-10-08 18:36:26.667417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.562 [2024-10-08 18:36:26.667424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.562 [2024-10-08 18:36:26.667430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.562 [2024-10-08 18:36:26.667444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.562 qpair failed and we were unable to recover it. 
00:28:33.562 [2024-10-08 18:36:26.677294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.562 [2024-10-08 18:36:26.677346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.562 [2024-10-08 18:36:26.677359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.562 [2024-10-08 18:36:26.677366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.562 [2024-10-08 18:36:26.677372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.562 [2024-10-08 18:36:26.677391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.562 qpair failed and we were unable to recover it. 00:28:33.562 [2024-10-08 18:36:26.687372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.562 [2024-10-08 18:36:26.687440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.562 [2024-10-08 18:36:26.687454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.562 [2024-10-08 18:36:26.687460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.562 [2024-10-08 18:36:26.687467] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.562 [2024-10-08 18:36:26.687481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.562 qpair failed and we were unable to recover it. 00:28:33.562 [2024-10-08 18:36:26.697390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.562 [2024-10-08 18:36:26.697447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.562 [2024-10-08 18:36:26.697460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.562 [2024-10-08 18:36:26.697467] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.562 [2024-10-08 18:36:26.697473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.562 [2024-10-08 18:36:26.697487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.562 qpair failed and we were unable to recover it. 
00:28:33.562 [2024-10-08 18:36:26.707396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.562 [2024-10-08 18:36:26.707453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.562 [2024-10-08 18:36:26.707470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.562 [2024-10-08 18:36:26.707481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.562 [2024-10-08 18:36:26.707490] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.562 [2024-10-08 18:36:26.707510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.562 qpair failed and we were unable to recover it. 00:28:33.562 [2024-10-08 18:36:26.717425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.562 [2024-10-08 18:36:26.717487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.562 [2024-10-08 18:36:26.717502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.562 [2024-10-08 18:36:26.717510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.562 [2024-10-08 18:36:26.717516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.562 [2024-10-08 18:36:26.717531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.562 qpair failed and we were unable to recover it. 00:28:33.562 [2024-10-08 18:36:26.727438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.562 [2024-10-08 18:36:26.727493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.562 [2024-10-08 18:36:26.727508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.562 [2024-10-08 18:36:26.727517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.562 [2024-10-08 18:36:26.727524] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.563 [2024-10-08 18:36:26.727538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.563 qpair failed and we were unable to recover it. 
00:28:33.563 [2024-10-08 18:36:26.737497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.563 [2024-10-08 18:36:26.737552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.563 [2024-10-08 18:36:26.737566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.563 [2024-10-08 18:36:26.737573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.563 [2024-10-08 18:36:26.737579] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.563 [2024-10-08 18:36:26.737594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.563 qpair failed and we were unable to recover it. 00:28:33.563 [2024-10-08 18:36:26.747513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.563 [2024-10-08 18:36:26.747570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.563 [2024-10-08 18:36:26.747585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.563 [2024-10-08 18:36:26.747591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.563 [2024-10-08 18:36:26.747597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.563 [2024-10-08 18:36:26.747612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.563 qpair failed and we were unable to recover it. 00:28:33.563 [2024-10-08 18:36:26.757542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.563 [2024-10-08 18:36:26.757599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.563 [2024-10-08 18:36:26.757612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.563 [2024-10-08 18:36:26.757619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.563 [2024-10-08 18:36:26.757625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.563 [2024-10-08 18:36:26.757641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.563 qpair failed and we were unable to recover it. 
00:28:33.563 [2024-10-08 18:36:26.767570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.563 [2024-10-08 18:36:26.767622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.563 [2024-10-08 18:36:26.767636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.563 [2024-10-08 18:36:26.767642] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.563 [2024-10-08 18:36:26.767649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.563 [2024-10-08 18:36:26.767663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.563 qpair failed and we were unable to recover it. 00:28:33.563 [2024-10-08 18:36:26.777598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.563 [2024-10-08 18:36:26.777652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.563 [2024-10-08 18:36:26.777665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.563 [2024-10-08 18:36:26.777672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.563 [2024-10-08 18:36:26.777678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.563 [2024-10-08 18:36:26.777694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.563 qpair failed and we were unable to recover it. 00:28:33.563 [2024-10-08 18:36:26.787639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.563 [2024-10-08 18:36:26.787699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.563 [2024-10-08 18:36:26.787713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.563 [2024-10-08 18:36:26.787720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.563 [2024-10-08 18:36:26.787725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.563 [2024-10-08 18:36:26.787740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.563 qpair failed and we were unable to recover it. 
00:28:33.563 [2024-10-08 18:36:26.797672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.563 [2024-10-08 18:36:26.797727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.563 [2024-10-08 18:36:26.797740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.563 [2024-10-08 18:36:26.797747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.563 [2024-10-08 18:36:26.797753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.563 [2024-10-08 18:36:26.797767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.563 qpair failed and we were unable to recover it. 00:28:33.563 [2024-10-08 18:36:26.807698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.563 [2024-10-08 18:36:26.807755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.563 [2024-10-08 18:36:26.807768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.563 [2024-10-08 18:36:26.807775] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.563 [2024-10-08 18:36:26.807782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.563 [2024-10-08 18:36:26.807797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.563 qpair failed and we were unable to recover it. 00:28:33.563 [2024-10-08 18:36:26.817699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.563 [2024-10-08 18:36:26.817797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.563 [2024-10-08 18:36:26.817811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.563 [2024-10-08 18:36:26.817822] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.563 [2024-10-08 18:36:26.817829] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.563 [2024-10-08 18:36:26.817843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.563 qpair failed and we were unable to recover it. 
00:28:33.563 [2024-10-08 18:36:26.827682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.563 [2024-10-08 18:36:26.827739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.563 [2024-10-08 18:36:26.827754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.563 [2024-10-08 18:36:26.827762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.563 [2024-10-08 18:36:26.827769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.563 [2024-10-08 18:36:26.827785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.563 qpair failed and we were unable to recover it. 00:28:33.563 [2024-10-08 18:36:26.837747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.563 [2024-10-08 18:36:26.837807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.563 [2024-10-08 18:36:26.837821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.563 [2024-10-08 18:36:26.837828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.563 [2024-10-08 18:36:26.837833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.563 [2024-10-08 18:36:26.837848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.563 qpair failed and we were unable to recover it. 00:28:33.563 [2024-10-08 18:36:26.847800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.563 [2024-10-08 18:36:26.847879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.563 [2024-10-08 18:36:26.847895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.563 [2024-10-08 18:36:26.847902] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.563 [2024-10-08 18:36:26.847908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.563 [2024-10-08 18:36:26.847922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.563 qpair failed and we were unable to recover it. 
00:28:33.563 [2024-10-08 18:36:26.857837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.563 [2024-10-08 18:36:26.857923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.563 [2024-10-08 18:36:26.857937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.563 [2024-10-08 18:36:26.857945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.563 [2024-10-08 18:36:26.857951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.563 [2024-10-08 18:36:26.857965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.563 qpair failed and we were unable to recover it. 00:28:33.563 [2024-10-08 18:36:26.867891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.563 [2024-10-08 18:36:26.867971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.563 [2024-10-08 18:36:26.867985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.563 [2024-10-08 18:36:26.867992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.563 [2024-10-08 18:36:26.867998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.563 [2024-10-08 18:36:26.868012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.564 qpair failed and we were unable to recover it. 00:28:33.564 [2024-10-08 18:36:26.877881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.564 [2024-10-08 18:36:26.877967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.564 [2024-10-08 18:36:26.877981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.564 [2024-10-08 18:36:26.877989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.564 [2024-10-08 18:36:26.877995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.564 [2024-10-08 18:36:26.878009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.564 qpair failed and we were unable to recover it. 
00:28:33.824 [2024-10-08 18:36:26.887876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.824 [2024-10-08 18:36:26.887937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.824 [2024-10-08 18:36:26.887950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.824 [2024-10-08 18:36:26.887957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.824 [2024-10-08 18:36:26.887964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.824 [2024-10-08 18:36:26.887978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-10-08 18:36:26.897881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.824 [2024-10-08 18:36:26.897971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.824 [2024-10-08 18:36:26.897986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.824 [2024-10-08 18:36:26.897993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.824 [2024-10-08 18:36:26.897999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.824 [2024-10-08 18:36:26.898014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-10-08 18:36:26.907986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.824 [2024-10-08 18:36:26.908042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.824 [2024-10-08 18:36:26.908058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.824 [2024-10-08 18:36:26.908065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.824 [2024-10-08 18:36:26.908071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.824 [2024-10-08 18:36:26.908086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.824 qpair failed and we were unable to recover it. 
00:28:33.824 [2024-10-08 18:36:26.917989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.824 [2024-10-08 18:36:26.918044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.824 [2024-10-08 18:36:26.918058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.824 [2024-10-08 18:36:26.918064] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.824 [2024-10-08 18:36:26.918070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.824 [2024-10-08 18:36:26.918084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-10-08 18:36:26.927981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.824 [2024-10-08 18:36:26.928063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.824 [2024-10-08 18:36:26.928077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.824 [2024-10-08 18:36:26.928084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.824 [2024-10-08 18:36:26.928090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.824 [2024-10-08 18:36:26.928105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-10-08 18:36:26.938006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.824 [2024-10-08 18:36:26.938074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.824 [2024-10-08 18:36:26.938086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.824 [2024-10-08 18:36:26.938093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.824 [2024-10-08 18:36:26.938099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.824 [2024-10-08 18:36:26.938114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.824 qpair failed and we were unable to recover it. 
00:28:33.824 [2024-10-08 18:36:26.948002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.824 [2024-10-08 18:36:26.948059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.824 [2024-10-08 18:36:26.948072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.824 [2024-10-08 18:36:26.948079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.824 [2024-10-08 18:36:26.948085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.824 [2024-10-08 18:36:26.948107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-10-08 18:36:26.958105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.824 [2024-10-08 18:36:26.958200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.824 [2024-10-08 18:36:26.958214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.824 [2024-10-08 18:36:26.958221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.824 [2024-10-08 18:36:26.958227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.824 [2024-10-08 18:36:26.958243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-10-08 18:36:26.968124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.824 [2024-10-08 18:36:26.968182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.824 [2024-10-08 18:36:26.968197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.824 [2024-10-08 18:36:26.968204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.824 [2024-10-08 18:36:26.968210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.824 [2024-10-08 18:36:26.968225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.824 qpair failed and we were unable to recover it. 
00:28:33.824 [2024-10-08 18:36:26.978085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.824 [2024-10-08 18:36:26.978140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.824 [2024-10-08 18:36:26.978154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.824 [2024-10-08 18:36:26.978161] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.824 [2024-10-08 18:36:26.978167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.824 [2024-10-08 18:36:26.978182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-10-08 18:36:26.988213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.825 [2024-10-08 18:36:26.988269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.825 [2024-10-08 18:36:26.988283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.825 [2024-10-08 18:36:26.988290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.825 [2024-10-08 18:36:26.988296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.825 [2024-10-08 18:36:26.988311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-10-08 18:36:26.998178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.825 [2024-10-08 18:36:26.998236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.825 [2024-10-08 18:36:26.998253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.825 [2024-10-08 18:36:26.998260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.825 [2024-10-08 18:36:26.998266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.825 [2024-10-08 18:36:26.998281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.825 qpair failed and we were unable to recover it. 
00:28:33.825 [2024-10-08 18:36:27.008235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.825 [2024-10-08 18:36:27.008317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.825 [2024-10-08 18:36:27.008332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.825 [2024-10-08 18:36:27.008340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.825 [2024-10-08 18:36:27.008346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.825 [2024-10-08 18:36:27.008361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-10-08 18:36:27.018247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.825 [2024-10-08 18:36:27.018312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.825 [2024-10-08 18:36:27.018325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.825 [2024-10-08 18:36:27.018332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.825 [2024-10-08 18:36:27.018338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.825 [2024-10-08 18:36:27.018352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-10-08 18:36:27.028250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.825 [2024-10-08 18:36:27.028316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.825 [2024-10-08 18:36:27.028329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.825 [2024-10-08 18:36:27.028336] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.825 [2024-10-08 18:36:27.028342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.825 [2024-10-08 18:36:27.028356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.825 qpair failed and we were unable to recover it. 
00:28:33.825 [2024-10-08 18:36:27.038313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.825 [2024-10-08 18:36:27.038366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.825 [2024-10-08 18:36:27.038384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.825 [2024-10-08 18:36:27.038391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.825 [2024-10-08 18:36:27.038397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.825 [2024-10-08 18:36:27.038414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-10-08 18:36:27.048309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.825 [2024-10-08 18:36:27.048404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.825 [2024-10-08 18:36:27.048419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.825 [2024-10-08 18:36:27.048425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.825 [2024-10-08 18:36:27.048431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.825 [2024-10-08 18:36:27.048446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-10-08 18:36:27.058374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.825 [2024-10-08 18:36:27.058433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.825 [2024-10-08 18:36:27.058446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.825 [2024-10-08 18:36:27.058453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.825 [2024-10-08 18:36:27.058459] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.825 [2024-10-08 18:36:27.058474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.825 qpair failed and we were unable to recover it. 
00:28:33.825 [2024-10-08 18:36:27.068400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.825 [2024-10-08 18:36:27.068456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.825 [2024-10-08 18:36:27.068470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.825 [2024-10-08 18:36:27.068477] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.825 [2024-10-08 18:36:27.068483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.825 [2024-10-08 18:36:27.068497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-10-08 18:36:27.078454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.825 [2024-10-08 18:36:27.078554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.825 [2024-10-08 18:36:27.078569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.825 [2024-10-08 18:36:27.078575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.825 [2024-10-08 18:36:27.078581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.825 [2024-10-08 18:36:27.078596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-10-08 18:36:27.088471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.825 [2024-10-08 18:36:27.088524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.825 [2024-10-08 18:36:27.088541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.825 [2024-10-08 18:36:27.088548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.825 [2024-10-08 18:36:27.088553] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.825 [2024-10-08 18:36:27.088568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.825 qpair failed and we were unable to recover it. 
00:28:33.825 [2024-10-08 18:36:27.098476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.825 [2024-10-08 18:36:27.098529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.825 [2024-10-08 18:36:27.098541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.825 [2024-10-08 18:36:27.098548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.825 [2024-10-08 18:36:27.098554] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.825 [2024-10-08 18:36:27.098569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-10-08 18:36:27.108541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.825 [2024-10-08 18:36:27.108600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.825 [2024-10-08 18:36:27.108613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.825 [2024-10-08 18:36:27.108620] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.825 [2024-10-08 18:36:27.108626] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.825 [2024-10-08 18:36:27.108640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-10-08 18:36:27.118551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.825 [2024-10-08 18:36:27.118606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.825 [2024-10-08 18:36:27.118619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.825 [2024-10-08 18:36:27.118626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.825 [2024-10-08 18:36:27.118632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.825 [2024-10-08 18:36:27.118646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.825 qpair failed and we were unable to recover it. 
00:28:33.825 [2024-10-08 18:36:27.128512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.825 [2024-10-08 18:36:27.128568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.826 [2024-10-08 18:36:27.128581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.826 [2024-10-08 18:36:27.128588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.826 [2024-10-08 18:36:27.128597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.826 [2024-10-08 18:36:27.128612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-10-08 18:36:27.138532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.826 [2024-10-08 18:36:27.138584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.826 [2024-10-08 18:36:27.138597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.826 [2024-10-08 18:36:27.138603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.826 [2024-10-08 18:36:27.138609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:33.826 [2024-10-08 18:36:27.138624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.826 qpair failed and we were unable to recover it. 00:28:34.085 [2024-10-08 18:36:27.148652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.085 [2024-10-08 18:36:27.148729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.085 [2024-10-08 18:36:27.148743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.086 [2024-10-08 18:36:27.148750] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.086 [2024-10-08 18:36:27.148756] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.086 [2024-10-08 18:36:27.148771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.086 qpair failed and we were unable to recover it. 
00:28:34.086 [2024-10-08 18:36:27.158658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.086 [2024-10-08 18:36:27.158715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.086 [2024-10-08 18:36:27.158729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.086 [2024-10-08 18:36:27.158736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.086 [2024-10-08 18:36:27.158741] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.086 [2024-10-08 18:36:27.158756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.086 qpair failed and we were unable to recover it. 00:28:34.086 [2024-10-08 18:36:27.168610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.086 [2024-10-08 18:36:27.168662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.086 [2024-10-08 18:36:27.168676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.086 [2024-10-08 18:36:27.168682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.086 [2024-10-08 18:36:27.168688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.086 [2024-10-08 18:36:27.168703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.086 qpair failed and we were unable to recover it. 00:28:34.086 [2024-10-08 18:36:27.178678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.086 [2024-10-08 18:36:27.178732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.086 [2024-10-08 18:36:27.178745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.086 [2024-10-08 18:36:27.178751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.086 [2024-10-08 18:36:27.178757] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.086 [2024-10-08 18:36:27.178771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.086 qpair failed and we were unable to recover it. 
00:28:34.086 [2024-10-08 18:36:27.188675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.086 [2024-10-08 18:36:27.188760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.086 [2024-10-08 18:36:27.188775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.086 [2024-10-08 18:36:27.188782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.086 [2024-10-08 18:36:27.188788] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.086 [2024-10-08 18:36:27.188803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.086 qpair failed and we were unable to recover it. 00:28:34.086 [2024-10-08 18:36:27.198718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.086 [2024-10-08 18:36:27.198804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.086 [2024-10-08 18:36:27.198819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.086 [2024-10-08 18:36:27.198826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.086 [2024-10-08 18:36:27.198832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.086 [2024-10-08 18:36:27.198847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.086 qpair failed and we were unable to recover it. 00:28:34.086 [2024-10-08 18:36:27.208804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.086 [2024-10-08 18:36:27.208861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.086 [2024-10-08 18:36:27.208879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.086 [2024-10-08 18:36:27.208889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.086 [2024-10-08 18:36:27.208898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.086 [2024-10-08 18:36:27.208914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.086 qpair failed and we were unable to recover it. 
00:28:34.086 [2024-10-08 18:36:27.218851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.086 [2024-10-08 18:36:27.218910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.086 [2024-10-08 18:36:27.218925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.086 [2024-10-08 18:36:27.218932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.086 [2024-10-08 18:36:27.218941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.086 [2024-10-08 18:36:27.218957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.086 qpair failed and we were unable to recover it. 00:28:34.086 [2024-10-08 18:36:27.228872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.086 [2024-10-08 18:36:27.228929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.086 [2024-10-08 18:36:27.228943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.086 [2024-10-08 18:36:27.228950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.086 [2024-10-08 18:36:27.228956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.086 [2024-10-08 18:36:27.228971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.086 qpair failed and we were unable to recover it. 00:28:34.086 [2024-10-08 18:36:27.238835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.086 [2024-10-08 18:36:27.238887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.086 [2024-10-08 18:36:27.238900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.086 [2024-10-08 18:36:27.238907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.086 [2024-10-08 18:36:27.238913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.086 [2024-10-08 18:36:27.238927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.086 qpair failed and we were unable to recover it. 
00:28:34.086 [2024-10-08 18:36:27.248922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.086 [2024-10-08 18:36:27.248973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.086 [2024-10-08 18:36:27.248987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.086 [2024-10-08 18:36:27.248994] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.086 [2024-10-08 18:36:27.249000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.086 [2024-10-08 18:36:27.249014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.086 qpair failed and we were unable to recover it. 00:28:34.086 [2024-10-08 18:36:27.258943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.086 [2024-10-08 18:36:27.258998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.086 [2024-10-08 18:36:27.259012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.086 [2024-10-08 18:36:27.259018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.086 [2024-10-08 18:36:27.259024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.086 [2024-10-08 18:36:27.259038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.086 qpair failed and we were unable to recover it. 00:28:34.086 [2024-10-08 18:36:27.268983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.086 [2024-10-08 18:36:27.269038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.086 [2024-10-08 18:36:27.269051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.086 [2024-10-08 18:36:27.269058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.086 [2024-10-08 18:36:27.269064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.086 [2024-10-08 18:36:27.269078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.086 qpair failed and we were unable to recover it. 
00:28:34.086 [2024-10-08 18:36:27.279012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.086 [2024-10-08 18:36:27.279071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.086 [2024-10-08 18:36:27.279084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.086 [2024-10-08 18:36:27.279091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.086 [2024-10-08 18:36:27.279097] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.086 [2024-10-08 18:36:27.279111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.086 qpair failed and we were unable to recover it. 00:28:34.087 [2024-10-08 18:36:27.289037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.087 [2024-10-08 18:36:27.289115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.087 [2024-10-08 18:36:27.289129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.087 [2024-10-08 18:36:27.289136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.087 [2024-10-08 18:36:27.289142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.087 [2024-10-08 18:36:27.289156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.087 qpair failed and we were unable to recover it. 00:28:34.087 [2024-10-08 18:36:27.299055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.087 [2024-10-08 18:36:27.299109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.087 [2024-10-08 18:36:27.299122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.087 [2024-10-08 18:36:27.299128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.087 [2024-10-08 18:36:27.299134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.087 [2024-10-08 18:36:27.299149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.087 qpair failed and we were unable to recover it. 
00:28:34.087 [2024-10-08 18:36:27.309099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.087 [2024-10-08 18:36:27.309155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.087 [2024-10-08 18:36:27.309169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.087 [2024-10-08 18:36:27.309178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.087 [2024-10-08 18:36:27.309184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.087 [2024-10-08 18:36:27.309198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.087 qpair failed and we were unable to recover it. 00:28:34.087 [2024-10-08 18:36:27.319105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.087 [2024-10-08 18:36:27.319157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.087 [2024-10-08 18:36:27.319171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.087 [2024-10-08 18:36:27.319178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.087 [2024-10-08 18:36:27.319184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.087 [2024-10-08 18:36:27.319199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.087 qpair failed and we were unable to recover it. 00:28:34.087 [2024-10-08 18:36:27.329151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.087 [2024-10-08 18:36:27.329205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.087 [2024-10-08 18:36:27.329218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.087 [2024-10-08 18:36:27.329224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.087 [2024-10-08 18:36:27.329230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.087 [2024-10-08 18:36:27.329244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.087 qpair failed and we were unable to recover it. 
00:28:34.087 [2024-10-08 18:36:27.339169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.087 [2024-10-08 18:36:27.339261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.087 [2024-10-08 18:36:27.339275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.087 [2024-10-08 18:36:27.339282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.087 [2024-10-08 18:36:27.339288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.087 [2024-10-08 18:36:27.339303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.087 qpair failed and we were unable to recover it. 00:28:34.087 [2024-10-08 18:36:27.349190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.087 [2024-10-08 18:36:27.349247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.087 [2024-10-08 18:36:27.349261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.087 [2024-10-08 18:36:27.349268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.087 [2024-10-08 18:36:27.349274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.087 [2024-10-08 18:36:27.349288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.087 qpair failed and we were unable to recover it. 00:28:34.087 [2024-10-08 18:36:27.359293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.087 [2024-10-08 18:36:27.359352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.087 [2024-10-08 18:36:27.359365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.087 [2024-10-08 18:36:27.359372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.087 [2024-10-08 18:36:27.359382] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.087 [2024-10-08 18:36:27.359397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.087 qpair failed and we were unable to recover it. 
00:28:34.087 [2024-10-08 18:36:27.369261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.087 [2024-10-08 18:36:27.369315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.087 [2024-10-08 18:36:27.369328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.087 [2024-10-08 18:36:27.369335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.087 [2024-10-08 18:36:27.369341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.087 [2024-10-08 18:36:27.369355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.087 qpair failed and we were unable to recover it. 00:28:34.087 [2024-10-08 18:36:27.379276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.087 [2024-10-08 18:36:27.379332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.087 [2024-10-08 18:36:27.379346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.087 [2024-10-08 18:36:27.379353] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.087 [2024-10-08 18:36:27.379359] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.087 [2024-10-08 18:36:27.379373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.087 qpair failed and we were unable to recover it. 00:28:34.087 [2024-10-08 18:36:27.389320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.087 [2024-10-08 18:36:27.389373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.087 [2024-10-08 18:36:27.389389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.087 [2024-10-08 18:36:27.389396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.087 [2024-10-08 18:36:27.389402] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.087 [2024-10-08 18:36:27.389417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.087 qpair failed and we were unable to recover it. 
00:28:34.087 [2024-10-08 18:36:27.399395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.087 [2024-10-08 18:36:27.399499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.087 [2024-10-08 18:36:27.399512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.087 [2024-10-08 18:36:27.399522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.087 [2024-10-08 18:36:27.399528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.087 [2024-10-08 18:36:27.399543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.087 qpair failed and we were unable to recover it. 00:28:34.347 [2024-10-08 18:36:27.409373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.347 [2024-10-08 18:36:27.409429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.347 [2024-10-08 18:36:27.409443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.347 [2024-10-08 18:36:27.409450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.347 [2024-10-08 18:36:27.409456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.347 [2024-10-08 18:36:27.409470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.347 qpair failed and we were unable to recover it. 00:28:34.347 [2024-10-08 18:36:27.419427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.347 [2024-10-08 18:36:27.419478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.347 [2024-10-08 18:36:27.419491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.347 [2024-10-08 18:36:27.419498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.347 [2024-10-08 18:36:27.419504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.347 [2024-10-08 18:36:27.419518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.347 qpair failed and we were unable to recover it. 
00:28:34.347 [2024-10-08 18:36:27.429455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.347 [2024-10-08 18:36:27.429512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.347 [2024-10-08 18:36:27.429525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.347 [2024-10-08 18:36:27.429532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.347 [2024-10-08 18:36:27.429538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.347 [2024-10-08 18:36:27.429552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.347 qpair failed and we were unable to recover it. 00:28:34.347 [2024-10-08 18:36:27.439490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.347 [2024-10-08 18:36:27.439543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.347 [2024-10-08 18:36:27.439556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.347 [2024-10-08 18:36:27.439563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.347 [2024-10-08 18:36:27.439569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.347 [2024-10-08 18:36:27.439584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.347 qpair failed and we were unable to recover it. 00:28:34.347 [2024-10-08 18:36:27.449499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.347 [2024-10-08 18:36:27.449593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.347 [2024-10-08 18:36:27.449607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.347 [2024-10-08 18:36:27.449614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.347 [2024-10-08 18:36:27.449620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.347 [2024-10-08 18:36:27.449635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.347 qpair failed and we were unable to recover it. 
00:28:34.347 [2024-10-08 18:36:27.459522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.347 [2024-10-08 18:36:27.459575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.347 [2024-10-08 18:36:27.459592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.347 [2024-10-08 18:36:27.459602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.347 [2024-10-08 18:36:27.459611] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.347 [2024-10-08 18:36:27.459628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.347 qpair failed and we were unable to recover it. 00:28:34.347 [2024-10-08 18:36:27.469594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.347 [2024-10-08 18:36:27.469650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.347 [2024-10-08 18:36:27.469665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.347 [2024-10-08 18:36:27.469672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.347 [2024-10-08 18:36:27.469679] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.347 [2024-10-08 18:36:27.469694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.347 qpair failed and we were unable to recover it. 00:28:34.347 [2024-10-08 18:36:27.479574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.347 [2024-10-08 18:36:27.479634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.347 [2024-10-08 18:36:27.479648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.347 [2024-10-08 18:36:27.479654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.347 [2024-10-08 18:36:27.479660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.347 [2024-10-08 18:36:27.479675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.347 qpair failed and we were unable to recover it. 
00:28:34.347 [2024-10-08 18:36:27.489609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.347 [2024-10-08 18:36:27.489662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.347 [2024-10-08 18:36:27.489680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.347 [2024-10-08 18:36:27.489686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.347 [2024-10-08 18:36:27.489692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.347 [2024-10-08 18:36:27.489706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.347 qpair failed and we were unable to recover it. 00:28:34.347 [2024-10-08 18:36:27.499644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.347 [2024-10-08 18:36:27.499705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.347 [2024-10-08 18:36:27.499718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.347 [2024-10-08 18:36:27.499724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.347 [2024-10-08 18:36:27.499730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.347 [2024-10-08 18:36:27.499745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.347 qpair failed and we were unable to recover it. 00:28:34.347 [2024-10-08 18:36:27.509697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.347 [2024-10-08 18:36:27.509771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.348 [2024-10-08 18:36:27.509785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.348 [2024-10-08 18:36:27.509792] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.348 [2024-10-08 18:36:27.509798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.348 [2024-10-08 18:36:27.509812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.348 qpair failed and we were unable to recover it. 
00:28:34.348 [2024-10-08 18:36:27.519704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.348 [2024-10-08 18:36:27.519761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.348 [2024-10-08 18:36:27.519774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.348 [2024-10-08 18:36:27.519781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.348 [2024-10-08 18:36:27.519787] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.348 [2024-10-08 18:36:27.519801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.348 qpair failed and we were unable to recover it. 00:28:34.348 [2024-10-08 18:36:27.529666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.348 [2024-10-08 18:36:27.529722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.348 [2024-10-08 18:36:27.529736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.348 [2024-10-08 18:36:27.529743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.348 [2024-10-08 18:36:27.529749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.348 [2024-10-08 18:36:27.529767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.348 qpair failed and we were unable to recover it. 00:28:34.348 [2024-10-08 18:36:27.539753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.348 [2024-10-08 18:36:27.539809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.348 [2024-10-08 18:36:27.539822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.348 [2024-10-08 18:36:27.539829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.348 [2024-10-08 18:36:27.539834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.348 [2024-10-08 18:36:27.539849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.348 qpair failed and we were unable to recover it. 
00:28:34.348 [2024-10-08 18:36:27.549788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.348 [2024-10-08 18:36:27.549860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.348 [2024-10-08 18:36:27.549875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.348 [2024-10-08 18:36:27.549881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.348 [2024-10-08 18:36:27.549887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.348 [2024-10-08 18:36:27.549901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.348 qpair failed and we were unable to recover it. 00:28:34.348 [2024-10-08 18:36:27.559880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.348 [2024-10-08 18:36:27.559987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.348 [2024-10-08 18:36:27.560000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.348 [2024-10-08 18:36:27.560007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.348 [2024-10-08 18:36:27.560013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.348 [2024-10-08 18:36:27.560028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.348 qpair failed and we were unable to recover it. 00:28:34.348 [2024-10-08 18:36:27.569843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.348 [2024-10-08 18:36:27.569898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.348 [2024-10-08 18:36:27.569911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.348 [2024-10-08 18:36:27.569918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.348 [2024-10-08 18:36:27.569924] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.348 [2024-10-08 18:36:27.569938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.348 qpair failed and we were unable to recover it. 
00:28:34.348 [2024-10-08 18:36:27.579872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.348 [2024-10-08 18:36:27.579925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.348 [2024-10-08 18:36:27.579941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.348 [2024-10-08 18:36:27.579948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.348 [2024-10-08 18:36:27.579953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.348 [2024-10-08 18:36:27.579968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.348 qpair failed and we were unable to recover it.
00:28:34.348 [2024-10-08 18:36:27.589907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.348 [2024-10-08 18:36:27.589964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.348 [2024-10-08 18:36:27.589978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.348 [2024-10-08 18:36:27.589985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.348 [2024-10-08 18:36:27.589990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.348 [2024-10-08 18:36:27.590005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.348 qpair failed and we were unable to recover it.
00:28:34.348 [2024-10-08 18:36:27.599931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.348 [2024-10-08 18:36:27.599984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.348 [2024-10-08 18:36:27.599997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.348 [2024-10-08 18:36:27.600003] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.348 [2024-10-08 18:36:27.600009] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.348 [2024-10-08 18:36:27.600023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.348 qpair failed and we were unable to recover it.
00:28:34.348 [2024-10-08 18:36:27.609966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.348 [2024-10-08 18:36:27.610023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.348 [2024-10-08 18:36:27.610036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.348 [2024-10-08 18:36:27.610042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.348 [2024-10-08 18:36:27.610048] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.348 [2024-10-08 18:36:27.610063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.348 qpair failed and we were unable to recover it.
00:28:34.348 [2024-10-08 18:36:27.619985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.348 [2024-10-08 18:36:27.620063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.348 [2024-10-08 18:36:27.620077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.348 [2024-10-08 18:36:27.620084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.348 [2024-10-08 18:36:27.620094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.348 [2024-10-08 18:36:27.620108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.348 qpair failed and we were unable to recover it.
00:28:34.348 [2024-10-08 18:36:27.629950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.348 [2024-10-08 18:36:27.630014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.348 [2024-10-08 18:36:27.630028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.348 [2024-10-08 18:36:27.630035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.348 [2024-10-08 18:36:27.630041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.348 [2024-10-08 18:36:27.630056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.348 qpair failed and we were unable to recover it.
00:28:34.348 [2024-10-08 18:36:27.640045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.348 [2024-10-08 18:36:27.640101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.348 [2024-10-08 18:36:27.640114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.348 [2024-10-08 18:36:27.640122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.348 [2024-10-08 18:36:27.640128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.348 [2024-10-08 18:36:27.640142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.348 qpair failed and we were unable to recover it.
00:28:34.349 [2024-10-08 18:36:27.650041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.349 [2024-10-08 18:36:27.650134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.349 [2024-10-08 18:36:27.650149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.349 [2024-10-08 18:36:27.650156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.349 [2024-10-08 18:36:27.650163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.349 [2024-10-08 18:36:27.650177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.349 qpair failed and we were unable to recover it.
00:28:34.349 [2024-10-08 18:36:27.660125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.349 [2024-10-08 18:36:27.660180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.349 [2024-10-08 18:36:27.660193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.349 [2024-10-08 18:36:27.660200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.349 [2024-10-08 18:36:27.660206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.349 [2024-10-08 18:36:27.660221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.349 qpair failed and we were unable to recover it.
00:28:34.608 [2024-10-08 18:36:27.670129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.608 [2024-10-08 18:36:27.670190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.608 [2024-10-08 18:36:27.670203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.608 [2024-10-08 18:36:27.670210] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.608 [2024-10-08 18:36:27.670216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.608 [2024-10-08 18:36:27.670230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.608 qpair failed and we were unable to recover it.
00:28:34.608 [2024-10-08 18:36:27.680197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.608 [2024-10-08 18:36:27.680250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.608 [2024-10-08 18:36:27.680264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.608 [2024-10-08 18:36:27.680271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.608 [2024-10-08 18:36:27.680277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.608 [2024-10-08 18:36:27.680292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.608 qpair failed and we were unable to recover it.
00:28:34.608 [2024-10-08 18:36:27.690160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.608 [2024-10-08 18:36:27.690217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.608 [2024-10-08 18:36:27.690230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.608 [2024-10-08 18:36:27.690238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.608 [2024-10-08 18:36:27.690243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.608 [2024-10-08 18:36:27.690258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.608 qpair failed and we were unable to recover it.
00:28:34.608 [2024-10-08 18:36:27.700242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.608 [2024-10-08 18:36:27.700299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.608 [2024-10-08 18:36:27.700312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.608 [2024-10-08 18:36:27.700318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.608 [2024-10-08 18:36:27.700324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.608 [2024-10-08 18:36:27.700338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.608 qpair failed and we were unable to recover it.
00:28:34.608 [2024-10-08 18:36:27.710225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.608 [2024-10-08 18:36:27.710290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.608 [2024-10-08 18:36:27.710306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.609 [2024-10-08 18:36:27.710317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.609 [2024-10-08 18:36:27.710332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.609 [2024-10-08 18:36:27.710351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.609 qpair failed and we were unable to recover it.
00:28:34.609 [2024-10-08 18:36:27.720310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.609 [2024-10-08 18:36:27.720368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.609 [2024-10-08 18:36:27.720389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.609 [2024-10-08 18:36:27.720397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.609 [2024-10-08 18:36:27.720404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.609 [2024-10-08 18:36:27.720420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.609 qpair failed and we were unable to recover it.
00:28:34.609 [2024-10-08 18:36:27.730317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.609 [2024-10-08 18:36:27.730378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.609 [2024-10-08 18:36:27.730393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.609 [2024-10-08 18:36:27.730400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.609 [2024-10-08 18:36:27.730406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.609 [2024-10-08 18:36:27.730421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.609 qpair failed and we were unable to recover it.
00:28:34.609 [2024-10-08 18:36:27.740242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.609 [2024-10-08 18:36:27.740295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.609 [2024-10-08 18:36:27.740309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.609 [2024-10-08 18:36:27.740316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.609 [2024-10-08 18:36:27.740322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.609 [2024-10-08 18:36:27.740336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.609 qpair failed and we were unable to recover it.
00:28:34.609 [2024-10-08 18:36:27.750343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.609 [2024-10-08 18:36:27.750419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.609 [2024-10-08 18:36:27.750433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.609 [2024-10-08 18:36:27.750440] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.609 [2024-10-08 18:36:27.750446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.609 [2024-10-08 18:36:27.750461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.609 qpair failed and we were unable to recover it.
00:28:34.609 [2024-10-08 18:36:27.760389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.609 [2024-10-08 18:36:27.760468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.609 [2024-10-08 18:36:27.760481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.609 [2024-10-08 18:36:27.760488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.609 [2024-10-08 18:36:27.760494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.609 [2024-10-08 18:36:27.760508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.609 qpair failed and we were unable to recover it.
00:28:34.609 [2024-10-08 18:36:27.770405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.609 [2024-10-08 18:36:27.770457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.609 [2024-10-08 18:36:27.770470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.609 [2024-10-08 18:36:27.770477] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.609 [2024-10-08 18:36:27.770483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.609 [2024-10-08 18:36:27.770497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.609 qpair failed and we were unable to recover it.
00:28:34.609 [2024-10-08 18:36:27.780421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.609 [2024-10-08 18:36:27.780472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.609 [2024-10-08 18:36:27.780485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.609 [2024-10-08 18:36:27.780492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.609 [2024-10-08 18:36:27.780498] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.609 [2024-10-08 18:36:27.780512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.609 qpair failed and we were unable to recover it.
00:28:34.609 [2024-10-08 18:36:27.790390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.609 [2024-10-08 18:36:27.790485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.609 [2024-10-08 18:36:27.790500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.609 [2024-10-08 18:36:27.790507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.609 [2024-10-08 18:36:27.790513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.609 [2024-10-08 18:36:27.790528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.609 qpair failed and we were unable to recover it.
00:28:34.609 [2024-10-08 18:36:27.800496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.609 [2024-10-08 18:36:27.800556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.609 [2024-10-08 18:36:27.800570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.609 [2024-10-08 18:36:27.800579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.609 [2024-10-08 18:36:27.800585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.609 [2024-10-08 18:36:27.800600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.609 qpair failed and we were unable to recover it.
00:28:34.609 [2024-10-08 18:36:27.810503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.609 [2024-10-08 18:36:27.810561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.609 [2024-10-08 18:36:27.810575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.609 [2024-10-08 18:36:27.810581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.609 [2024-10-08 18:36:27.810588] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.609 [2024-10-08 18:36:27.810603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.609 qpair failed and we were unable to recover it.
00:28:34.609 [2024-10-08 18:36:27.820522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.609 [2024-10-08 18:36:27.820574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.609 [2024-10-08 18:36:27.820587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.609 [2024-10-08 18:36:27.820594] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.609 [2024-10-08 18:36:27.820599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.609 [2024-10-08 18:36:27.820614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.609 qpair failed and we were unable to recover it.
00:28:34.609 [2024-10-08 18:36:27.830567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.609 [2024-10-08 18:36:27.830623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.609 [2024-10-08 18:36:27.830636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.609 [2024-10-08 18:36:27.830643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.609 [2024-10-08 18:36:27.830649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.609 [2024-10-08 18:36:27.830662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.609 qpair failed and we were unable to recover it.
00:28:34.609 [2024-10-08 18:36:27.840575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.609 [2024-10-08 18:36:27.840632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.609 [2024-10-08 18:36:27.840647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.609 [2024-10-08 18:36:27.840654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.609 [2024-10-08 18:36:27.840661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.609 [2024-10-08 18:36:27.840676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.609 qpair failed and we were unable to recover it.
00:28:34.609 [2024-10-08 18:36:27.850636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.609 [2024-10-08 18:36:27.850695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.609 [2024-10-08 18:36:27.850709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.609 [2024-10-08 18:36:27.850716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.610 [2024-10-08 18:36:27.850722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.610 [2024-10-08 18:36:27.850737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.610 qpair failed and we were unable to recover it.
00:28:34.610 [2024-10-08 18:36:27.860656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.610 [2024-10-08 18:36:27.860711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.610 [2024-10-08 18:36:27.860724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.610 [2024-10-08 18:36:27.860730] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.610 [2024-10-08 18:36:27.860736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.610 [2024-10-08 18:36:27.860750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.610 qpair failed and we were unable to recover it.
00:28:34.610 [2024-10-08 18:36:27.870682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.610 [2024-10-08 18:36:27.870736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.610 [2024-10-08 18:36:27.870750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.610 [2024-10-08 18:36:27.870756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.610 [2024-10-08 18:36:27.870762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.610 [2024-10-08 18:36:27.870776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.610 qpair failed and we were unable to recover it.
00:28:34.610 [2024-10-08 18:36:27.880701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.610 [2024-10-08 18:36:27.880755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.610 [2024-10-08 18:36:27.880768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.610 [2024-10-08 18:36:27.880774] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.610 [2024-10-08 18:36:27.880780] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.610 [2024-10-08 18:36:27.880795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.610 qpair failed and we were unable to recover it.
00:28:34.610 [2024-10-08 18:36:27.890719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.610 [2024-10-08 18:36:27.890770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.610 [2024-10-08 18:36:27.890784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.610 [2024-10-08 18:36:27.890793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.610 [2024-10-08 18:36:27.890799] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.610 [2024-10-08 18:36:27.890814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.610 qpair failed and we were unable to recover it.
00:28:34.610 [2024-10-08 18:36:27.900807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.610 [2024-10-08 18:36:27.900857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.610 [2024-10-08 18:36:27.900870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.610 [2024-10-08 18:36:27.900877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.610 [2024-10-08 18:36:27.900883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.610 [2024-10-08 18:36:27.900896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.610 qpair failed and we were unable to recover it.
00:28:34.610 [2024-10-08 18:36:27.910812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.610 [2024-10-08 18:36:27.910896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.610 [2024-10-08 18:36:27.910910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.610 [2024-10-08 18:36:27.910917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.610 [2024-10-08 18:36:27.910923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.610 [2024-10-08 18:36:27.910937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.610 qpair failed and we were unable to recover it.
00:28:34.610 [2024-10-08 18:36:27.920859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.610 [2024-10-08 18:36:27.920917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.610 [2024-10-08 18:36:27.920930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.610 [2024-10-08 18:36:27.920937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.610 [2024-10-08 18:36:27.920942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.610 [2024-10-08 18:36:27.920957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.610 qpair failed and we were unable to recover it.
00:28:34.870 [2024-10-08 18:36:27.930893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.870 [2024-10-08 18:36:27.930952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.870 [2024-10-08 18:36:27.930966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.870 [2024-10-08 18:36:27.930973] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.870 [2024-10-08 18:36:27.930980] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.870 [2024-10-08 18:36:27.930994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.870 qpair failed and we were unable to recover it.
00:28:34.870 [2024-10-08 18:36:27.940890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.870 [2024-10-08 18:36:27.940941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.870 [2024-10-08 18:36:27.940955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.870 [2024-10-08 18:36:27.940962] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.870 [2024-10-08 18:36:27.940967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.870 [2024-10-08 18:36:27.940982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.870 qpair failed and we were unable to recover it.
00:28:34.870 [2024-10-08 18:36:27.950922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.870 [2024-10-08 18:36:27.950996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.870 [2024-10-08 18:36:27.951013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.870 [2024-10-08 18:36:27.951020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.870 [2024-10-08 18:36:27.951026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.870 [2024-10-08 18:36:27.951040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.870 qpair failed and we were unable to recover it.
00:28:34.870 [2024-10-08 18:36:27.960939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.870 [2024-10-08 18:36:27.960996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.870 [2024-10-08 18:36:27.961012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.870 [2024-10-08 18:36:27.961022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.870 [2024-10-08 18:36:27.961031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.870 [2024-10-08 18:36:27.961048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.870 qpair failed and we were unable to recover it.
00:28:34.870 [2024-10-08 18:36:27.970989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.870 [2024-10-08 18:36:27.971053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.870 [2024-10-08 18:36:27.971068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.870 [2024-10-08 18:36:27.971075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.870 [2024-10-08 18:36:27.971081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.870 [2024-10-08 18:36:27.971096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.870 qpair failed and we were unable to recover it.
00:28:34.870 [2024-10-08 18:36:27.981005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.870 [2024-10-08 18:36:27.981058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.870 [2024-10-08 18:36:27.981076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.870 [2024-10-08 18:36:27.981082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.870 [2024-10-08 18:36:27.981088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.870 [2024-10-08 18:36:27.981103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.870 qpair failed and we were unable to recover it.
00:28:34.870 [2024-10-08 18:36:27.990983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.870 [2024-10-08 18:36:27.991059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.870 [2024-10-08 18:36:27.991075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.870 [2024-10-08 18:36:27.991082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.870 [2024-10-08 18:36:27.991088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.870 [2024-10-08 18:36:27.991103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.870 qpair failed and we were unable to recover it.
00:28:34.870 [2024-10-08 18:36:28.001099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.870 [2024-10-08 18:36:28.001164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.870 [2024-10-08 18:36:28.001177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.870 [2024-10-08 18:36:28.001184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.870 [2024-10-08 18:36:28.001190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.870 [2024-10-08 18:36:28.001204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.870 qpair failed and we were unable to recover it.
00:28:34.870 [2024-10-08 18:36:28.011076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.870 [2024-10-08 18:36:28.011125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.870 [2024-10-08 18:36:28.011138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.870 [2024-10-08 18:36:28.011145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.870 [2024-10-08 18:36:28.011151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.870 [2024-10-08 18:36:28.011166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.870 qpair failed and we were unable to recover it.
00:28:34.870 [2024-10-08 18:36:28.021108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.870 [2024-10-08 18:36:28.021161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.870 [2024-10-08 18:36:28.021174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.870 [2024-10-08 18:36:28.021181] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.870 [2024-10-08 18:36:28.021187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.870 [2024-10-08 18:36:28.021205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.870 qpair failed and we were unable to recover it.
00:28:34.870 [2024-10-08 18:36:28.031145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.871 [2024-10-08 18:36:28.031201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.871 [2024-10-08 18:36:28.031215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.871 [2024-10-08 18:36:28.031221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.871 [2024-10-08 18:36:28.031227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.871 [2024-10-08 18:36:28.031241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.871 qpair failed and we were unable to recover it.
00:28:34.871 [2024-10-08 18:36:28.041171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.871 [2024-10-08 18:36:28.041228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.871 [2024-10-08 18:36:28.041242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.871 [2024-10-08 18:36:28.041248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.871 [2024-10-08 18:36:28.041254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.871 [2024-10-08 18:36:28.041269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.871 qpair failed and we were unable to recover it.
00:28:34.871 [2024-10-08 18:36:28.051187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.871 [2024-10-08 18:36:28.051242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.871 [2024-10-08 18:36:28.051256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.871 [2024-10-08 18:36:28.051263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.871 [2024-10-08 18:36:28.051269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.871 [2024-10-08 18:36:28.051283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.871 qpair failed and we were unable to recover it.
00:28:34.871 [2024-10-08 18:36:28.061215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.871 [2024-10-08 18:36:28.061269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.871 [2024-10-08 18:36:28.061282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.871 [2024-10-08 18:36:28.061289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.871 [2024-10-08 18:36:28.061295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.871 [2024-10-08 18:36:28.061310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.871 qpair failed and we were unable to recover it.
00:28:34.871 [2024-10-08 18:36:28.071253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.871 [2024-10-08 18:36:28.071312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.871 [2024-10-08 18:36:28.071328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.871 [2024-10-08 18:36:28.071335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.871 [2024-10-08 18:36:28.071341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.871 [2024-10-08 18:36:28.071355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.871 qpair failed and we were unable to recover it.
00:28:34.871 [2024-10-08 18:36:28.081330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.871 [2024-10-08 18:36:28.081434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.871 [2024-10-08 18:36:28.081447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.871 [2024-10-08 18:36:28.081454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.871 [2024-10-08 18:36:28.081460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.871 [2024-10-08 18:36:28.081475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.871 qpair failed and we were unable to recover it.
00:28:34.871 [2024-10-08 18:36:28.091306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.871 [2024-10-08 18:36:28.091358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.871 [2024-10-08 18:36:28.091371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.871 [2024-10-08 18:36:28.091383] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.871 [2024-10-08 18:36:28.091389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.871 [2024-10-08 18:36:28.091403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.871 qpair failed and we were unable to recover it.
00:28:34.871 [2024-10-08 18:36:28.101358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.871 [2024-10-08 18:36:28.101435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.871 [2024-10-08 18:36:28.101448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.871 [2024-10-08 18:36:28.101455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.871 [2024-10-08 18:36:28.101461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.871 [2024-10-08 18:36:28.101475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.871 qpair failed and we were unable to recover it.
00:28:34.871 [2024-10-08 18:36:28.111369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.871 [2024-10-08 18:36:28.111431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.871 [2024-10-08 18:36:28.111444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.871 [2024-10-08 18:36:28.111451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.871 [2024-10-08 18:36:28.111457] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.871 [2024-10-08 18:36:28.111475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.871 qpair failed and we were unable to recover it.
00:28:34.871 [2024-10-08 18:36:28.121401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.871 [2024-10-08 18:36:28.121480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.871 [2024-10-08 18:36:28.121494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.871 [2024-10-08 18:36:28.121501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.871 [2024-10-08 18:36:28.121507] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.871 [2024-10-08 18:36:28.121522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.871 qpair failed and we were unable to recover it.
00:28:34.871 [2024-10-08 18:36:28.131467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.871 [2024-10-08 18:36:28.131527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.871 [2024-10-08 18:36:28.131540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.871 [2024-10-08 18:36:28.131547] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.871 [2024-10-08 18:36:28.131553] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.871 [2024-10-08 18:36:28.131568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.871 qpair failed and we were unable to recover it.
00:28:34.871 [2024-10-08 18:36:28.141380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.871 [2024-10-08 18:36:28.141442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.871 [2024-10-08 18:36:28.141455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.871 [2024-10-08 18:36:28.141462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.871 [2024-10-08 18:36:28.141468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:34.871 [2024-10-08 18:36:28.141483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.871 qpair failed and we were unable to recover it.
00:28:34.871 [2024-10-08 18:36:28.151484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.871 [2024-10-08 18:36:28.151580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.871 [2024-10-08 18:36:28.151594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.871 [2024-10-08 18:36:28.151601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.871 [2024-10-08 18:36:28.151607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.871 [2024-10-08 18:36:28.151622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.871 qpair failed and we were unable to recover it. 00:28:34.871 [2024-10-08 18:36:28.161494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.871 [2024-10-08 18:36:28.161566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.871 [2024-10-08 18:36:28.161579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.871 [2024-10-08 18:36:28.161586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.871 [2024-10-08 18:36:28.161592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.872 [2024-10-08 18:36:28.161607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.872 qpair failed and we were unable to recover it. 00:28:34.872 [2024-10-08 18:36:28.171591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.872 [2024-10-08 18:36:28.171642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.872 [2024-10-08 18:36:28.171655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.872 [2024-10-08 18:36:28.171662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.872 [2024-10-08 18:36:28.171668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.872 [2024-10-08 18:36:28.171683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.872 qpair failed and we were unable to recover it. 
00:28:34.872 [2024-10-08 18:36:28.181574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.872 [2024-10-08 18:36:28.181654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.872 [2024-10-08 18:36:28.181667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.872 [2024-10-08 18:36:28.181674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.872 [2024-10-08 18:36:28.181680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:34.872 [2024-10-08 18:36:28.181695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.872 qpair failed and we were unable to recover it. 00:28:35.131 [2024-10-08 18:36:28.191617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.131 [2024-10-08 18:36:28.191697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.131 [2024-10-08 18:36:28.191712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.131 [2024-10-08 18:36:28.191720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.131 [2024-10-08 18:36:28.191726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.131 [2024-10-08 18:36:28.191741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.131 qpair failed and we were unable to recover it. 00:28:35.131 [2024-10-08 18:36:28.201641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.131 [2024-10-08 18:36:28.201699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.132 [2024-10-08 18:36:28.201712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.132 [2024-10-08 18:36:28.201719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.132 [2024-10-08 18:36:28.201729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.132 [2024-10-08 18:36:28.201743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.132 qpair failed and we were unable to recover it. 
00:28:35.132 [2024-10-08 18:36:28.211649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.132 [2024-10-08 18:36:28.211702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.132 [2024-10-08 18:36:28.211719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.132 [2024-10-08 18:36:28.211729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.132 [2024-10-08 18:36:28.211738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.132 [2024-10-08 18:36:28.211754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.132 qpair failed and we were unable to recover it. 00:28:35.132 [2024-10-08 18:36:28.221704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.132 [2024-10-08 18:36:28.221762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.132 [2024-10-08 18:36:28.221777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.132 [2024-10-08 18:36:28.221784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.132 [2024-10-08 18:36:28.221790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.132 [2024-10-08 18:36:28.221805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.132 qpair failed and we were unable to recover it. 00:28:35.132 [2024-10-08 18:36:28.231741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.132 [2024-10-08 18:36:28.231799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.132 [2024-10-08 18:36:28.231813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.132 [2024-10-08 18:36:28.231820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.132 [2024-10-08 18:36:28.231826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.132 [2024-10-08 18:36:28.231840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.132 qpair failed and we were unable to recover it. 
00:28:35.132 [2024-10-08 18:36:28.241785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.132 [2024-10-08 18:36:28.241861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.132 [2024-10-08 18:36:28.241876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.132 [2024-10-08 18:36:28.241883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.132 [2024-10-08 18:36:28.241889] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.132 [2024-10-08 18:36:28.241904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.132 qpair failed and we were unable to recover it. 00:28:35.132 [2024-10-08 18:36:28.251773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.132 [2024-10-08 18:36:28.251837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.132 [2024-10-08 18:36:28.251853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.132 [2024-10-08 18:36:28.251860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.132 [2024-10-08 18:36:28.251866] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.132 [2024-10-08 18:36:28.251880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.132 qpair failed and we were unable to recover it. 00:28:35.132 [2024-10-08 18:36:28.261783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.132 [2024-10-08 18:36:28.261839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.132 [2024-10-08 18:36:28.261852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.132 [2024-10-08 18:36:28.261859] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.132 [2024-10-08 18:36:28.261865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.132 [2024-10-08 18:36:28.261879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.132 qpair failed and we were unable to recover it. 
00:28:35.132 [2024-10-08 18:36:28.271827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.132 [2024-10-08 18:36:28.271883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.132 [2024-10-08 18:36:28.271896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.132 [2024-10-08 18:36:28.271903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.132 [2024-10-08 18:36:28.271909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.132 [2024-10-08 18:36:28.271923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.132 qpair failed and we were unable to recover it. 00:28:35.132 [2024-10-08 18:36:28.281843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.132 [2024-10-08 18:36:28.281944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.132 [2024-10-08 18:36:28.281958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.132 [2024-10-08 18:36:28.281965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.132 [2024-10-08 18:36:28.281971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.132 [2024-10-08 18:36:28.281986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.132 qpair failed and we were unable to recover it. 00:28:35.132 [2024-10-08 18:36:28.291851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.132 [2024-10-08 18:36:28.291906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.132 [2024-10-08 18:36:28.291920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.132 [2024-10-08 18:36:28.291930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.132 [2024-10-08 18:36:28.291936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.132 [2024-10-08 18:36:28.291951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.132 qpair failed and we were unable to recover it. 
00:28:35.132 [2024-10-08 18:36:28.301891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.132 [2024-10-08 18:36:28.301943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.132 [2024-10-08 18:36:28.301956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.132 [2024-10-08 18:36:28.301963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.132 [2024-10-08 18:36:28.301969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.132 [2024-10-08 18:36:28.301983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.132 qpair failed and we were unable to recover it. 00:28:35.133 [2024-10-08 18:36:28.311937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.133 [2024-10-08 18:36:28.311992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.133 [2024-10-08 18:36:28.312005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.133 [2024-10-08 18:36:28.312012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.133 [2024-10-08 18:36:28.312018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.133 [2024-10-08 18:36:28.312032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.133 qpair failed and we were unable to recover it. 00:28:35.133 [2024-10-08 18:36:28.321966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.133 [2024-10-08 18:36:28.322024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.133 [2024-10-08 18:36:28.322039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.133 [2024-10-08 18:36:28.322046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.133 [2024-10-08 18:36:28.322051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.133 [2024-10-08 18:36:28.322067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.133 qpair failed and we were unable to recover it. 
00:28:35.133 [2024-10-08 18:36:28.331999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.133 [2024-10-08 18:36:28.332052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.133 [2024-10-08 18:36:28.332065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.133 [2024-10-08 18:36:28.332072] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.133 [2024-10-08 18:36:28.332077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.133 [2024-10-08 18:36:28.332091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.133 qpair failed and we were unable to recover it. 00:28:35.133 [2024-10-08 18:36:28.342010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.133 [2024-10-08 18:36:28.342061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.133 [2024-10-08 18:36:28.342075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.133 [2024-10-08 18:36:28.342082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.133 [2024-10-08 18:36:28.342088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.133 [2024-10-08 18:36:28.342102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.133 qpair failed and we were unable to recover it. 00:28:35.133 [2024-10-08 18:36:28.352054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.133 [2024-10-08 18:36:28.352236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.133 [2024-10-08 18:36:28.352253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.133 [2024-10-08 18:36:28.352260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.133 [2024-10-08 18:36:28.352266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.133 [2024-10-08 18:36:28.352280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.133 qpair failed and we were unable to recover it. 
00:28:35.133 [2024-10-08 18:36:28.362119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.133 [2024-10-08 18:36:28.362199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.133 [2024-10-08 18:36:28.362214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.133 [2024-10-08 18:36:28.362221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.133 [2024-10-08 18:36:28.362227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.133 [2024-10-08 18:36:28.362241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.133 qpair failed and we were unable to recover it. 00:28:35.133 [2024-10-08 18:36:28.372081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.133 [2024-10-08 18:36:28.372135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.133 [2024-10-08 18:36:28.372149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.133 [2024-10-08 18:36:28.372155] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.133 [2024-10-08 18:36:28.372161] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.133 [2024-10-08 18:36:28.372175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.133 qpair failed and we were unable to recover it. 00:28:35.133 [2024-10-08 18:36:28.382118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.133 [2024-10-08 18:36:28.382202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.133 [2024-10-08 18:36:28.382217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.133 [2024-10-08 18:36:28.382227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.133 [2024-10-08 18:36:28.382233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.133 [2024-10-08 18:36:28.382247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.133 qpair failed and we were unable to recover it. 
00:28:35.133 [2024-10-08 18:36:28.392084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.133 [2024-10-08 18:36:28.392141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.133 [2024-10-08 18:36:28.392156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.133 [2024-10-08 18:36:28.392163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.133 [2024-10-08 18:36:28.392168] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.133 [2024-10-08 18:36:28.392183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.133 qpair failed and we were unable to recover it. 00:28:35.133 [2024-10-08 18:36:28.402184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.133 [2024-10-08 18:36:28.402258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.133 [2024-10-08 18:36:28.402272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.133 [2024-10-08 18:36:28.402278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.133 [2024-10-08 18:36:28.402284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.133 [2024-10-08 18:36:28.402299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.133 qpair failed and we were unable to recover it. 00:28:35.133 [2024-10-08 18:36:28.412203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.133 [2024-10-08 18:36:28.412258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.133 [2024-10-08 18:36:28.412272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.133 [2024-10-08 18:36:28.412278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.134 [2024-10-08 18:36:28.412284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.134 [2024-10-08 18:36:28.412299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.134 qpair failed and we were unable to recover it. 
00:28:35.134 [2024-10-08 18:36:28.422247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.134 [2024-10-08 18:36:28.422313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.134 [2024-10-08 18:36:28.422327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.134 [2024-10-08 18:36:28.422333] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.134 [2024-10-08 18:36:28.422339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.134 [2024-10-08 18:36:28.422354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.134 qpair failed and we were unable to recover it. 00:28:35.134 [2024-10-08 18:36:28.432292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.134 [2024-10-08 18:36:28.432346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.134 [2024-10-08 18:36:28.432360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.134 [2024-10-08 18:36:28.432366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.134 [2024-10-08 18:36:28.432372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.134 [2024-10-08 18:36:28.432392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.134 qpair failed and we were unable to recover it. 00:28:35.134 [2024-10-08 18:36:28.442346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.134 [2024-10-08 18:36:28.442454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.134 [2024-10-08 18:36:28.442468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.134 [2024-10-08 18:36:28.442475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.134 [2024-10-08 18:36:28.442481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.134 [2024-10-08 18:36:28.442496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.134 qpair failed and we were unable to recover it. 
00:28:35.134 [2024-10-08 18:36:28.452339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.134 [2024-10-08 18:36:28.452423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.134 [2024-10-08 18:36:28.452438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.134 [2024-10-08 18:36:28.452445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.134 [2024-10-08 18:36:28.452451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.134 [2024-10-08 18:36:28.452466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.134 qpair failed and we were unable to recover it. 00:28:35.394 [2024-10-08 18:36:28.462336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.394 [2024-10-08 18:36:28.462397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.394 [2024-10-08 18:36:28.462415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.394 [2024-10-08 18:36:28.462426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.394 [2024-10-08 18:36:28.462435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.394 [2024-10-08 18:36:28.462455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.394 qpair failed and we were unable to recover it. 00:28:35.394 [2024-10-08 18:36:28.472439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.394 [2024-10-08 18:36:28.472496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.394 [2024-10-08 18:36:28.472518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.394 [2024-10-08 18:36:28.472525] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.394 [2024-10-08 18:36:28.472531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.394 [2024-10-08 18:36:28.472546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.394 qpair failed and we were unable to recover it. 
00:28:35.394 [2024-10-08 18:36:28.482409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.394 [2024-10-08 18:36:28.482472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.394 [2024-10-08 18:36:28.482486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.394 [2024-10-08 18:36:28.482493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.394 [2024-10-08 18:36:28.482499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.395 [2024-10-08 18:36:28.482515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.395 qpair failed and we were unable to recover it. 00:28:35.395 [2024-10-08 18:36:28.492482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.395 [2024-10-08 18:36:28.492534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.395 [2024-10-08 18:36:28.492549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.395 [2024-10-08 18:36:28.492556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.395 [2024-10-08 18:36:28.492562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.395 [2024-10-08 18:36:28.492577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.395 qpair failed and we were unable to recover it. 00:28:35.395 [2024-10-08 18:36:28.502412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.395 [2024-10-08 18:36:28.502486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.395 [2024-10-08 18:36:28.502499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.395 [2024-10-08 18:36:28.502506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.395 [2024-10-08 18:36:28.502512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.395 [2024-10-08 18:36:28.502527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.395 qpair failed and we were unable to recover it. 
00:28:35.395 [2024-10-08 18:36:28.512477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.395 [2024-10-08 18:36:28.512538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.395 [2024-10-08 18:36:28.512551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.395 [2024-10-08 18:36:28.512558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.395 [2024-10-08 18:36:28.512564] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.395 [2024-10-08 18:36:28.512580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.395 qpair failed and we were unable to recover it. 00:28:35.395 [2024-10-08 18:36:28.522519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.395 [2024-10-08 18:36:28.522574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.395 [2024-10-08 18:36:28.522588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.395 [2024-10-08 18:36:28.522595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.395 [2024-10-08 18:36:28.522601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.395 [2024-10-08 18:36:28.522616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.395 qpair failed and we were unable to recover it. 00:28:35.395 [2024-10-08 18:36:28.532510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.395 [2024-10-08 18:36:28.532592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.395 [2024-10-08 18:36:28.532607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.395 [2024-10-08 18:36:28.532615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.395 [2024-10-08 18:36:28.532621] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.395 [2024-10-08 18:36:28.532636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.395 qpair failed and we were unable to recover it. 
00:28:35.395 [2024-10-08 18:36:28.542522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.395 [2024-10-08 18:36:28.542597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.395 [2024-10-08 18:36:28.542612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.395 [2024-10-08 18:36:28.542619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.395 [2024-10-08 18:36:28.542625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.395 [2024-10-08 18:36:28.542640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.395 qpair failed and we were unable to recover it. 00:28:35.395 [2024-10-08 18:36:28.552603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.395 [2024-10-08 18:36:28.552662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.395 [2024-10-08 18:36:28.552676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.395 [2024-10-08 18:36:28.552683] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.395 [2024-10-08 18:36:28.552689] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.395 [2024-10-08 18:36:28.552704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.395 qpair failed and we were unable to recover it. 00:28:35.395 [2024-10-08 18:36:28.562657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.395 [2024-10-08 18:36:28.562727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.395 [2024-10-08 18:36:28.562748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.395 [2024-10-08 18:36:28.562754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.395 [2024-10-08 18:36:28.562760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.395 [2024-10-08 18:36:28.562775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.395 qpair failed and we were unable to recover it. 
00:28:35.395 [2024-10-08 18:36:28.572650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.395 [2024-10-08 18:36:28.572702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.395 [2024-10-08 18:36:28.572716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.395 [2024-10-08 18:36:28.572722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.395 [2024-10-08 18:36:28.572728] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.395 [2024-10-08 18:36:28.572742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.395 qpair failed and we were unable to recover it. 00:28:35.395 [2024-10-08 18:36:28.582696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.395 [2024-10-08 18:36:28.582752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.395 [2024-10-08 18:36:28.582764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.395 [2024-10-08 18:36:28.582771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.395 [2024-10-08 18:36:28.582777] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.395 [2024-10-08 18:36:28.582792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.395 qpair failed and we were unable to recover it. 00:28:35.395 [2024-10-08 18:36:28.592759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.395 [2024-10-08 18:36:28.592837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.395 [2024-10-08 18:36:28.592851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.395 [2024-10-08 18:36:28.592857] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.395 [2024-10-08 18:36:28.592865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.395 [2024-10-08 18:36:28.592879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.395 qpair failed and we were unable to recover it. 
00:28:35.395 [2024-10-08 18:36:28.602750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.395 [2024-10-08 18:36:28.602838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.395 [2024-10-08 18:36:28.602853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.395 [2024-10-08 18:36:28.602860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.395 [2024-10-08 18:36:28.602866] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.395 [2024-10-08 18:36:28.602883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.395 qpair failed and we were unable to recover it. 00:28:35.395 [2024-10-08 18:36:28.612784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.395 [2024-10-08 18:36:28.612834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.395 [2024-10-08 18:36:28.612847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.395 [2024-10-08 18:36:28.612854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.395 [2024-10-08 18:36:28.612860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.395 [2024-10-08 18:36:28.612874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.395 qpair failed and we were unable to recover it. 00:28:35.395 [2024-10-08 18:36:28.622801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.395 [2024-10-08 18:36:28.622853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.395 [2024-10-08 18:36:28.622866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.395 [2024-10-08 18:36:28.622872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.395 [2024-10-08 18:36:28.622878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.395 [2024-10-08 18:36:28.622892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.395 qpair failed and we were unable to recover it. 
00:28:35.395 [2024-10-08 18:36:28.632784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.396 [2024-10-08 18:36:28.632837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.396 [2024-10-08 18:36:28.632851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.396 [2024-10-08 18:36:28.632858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.396 [2024-10-08 18:36:28.632863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.396 [2024-10-08 18:36:28.632877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.396 qpair failed and we were unable to recover it. 00:28:35.396 [2024-10-08 18:36:28.642869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.396 [2024-10-08 18:36:28.642935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.396 [2024-10-08 18:36:28.642949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.396 [2024-10-08 18:36:28.642955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.396 [2024-10-08 18:36:28.642961] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.396 [2024-10-08 18:36:28.642975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.396 qpair failed and we were unable to recover it. 00:28:35.396 [2024-10-08 18:36:28.652873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.396 [2024-10-08 18:36:28.652954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.396 [2024-10-08 18:36:28.652972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.396 [2024-10-08 18:36:28.652979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.396 [2024-10-08 18:36:28.652985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.396 [2024-10-08 18:36:28.653000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.396 qpair failed and we were unable to recover it. 
00:28:35.396 [2024-10-08 18:36:28.662938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.396 [2024-10-08 18:36:28.662987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.396 [2024-10-08 18:36:28.663000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.396 [2024-10-08 18:36:28.663007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.396 [2024-10-08 18:36:28.663013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.396 [2024-10-08 18:36:28.663027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.396 qpair failed and we were unable to recover it. 00:28:35.396 [2024-10-08 18:36:28.672951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.396 [2024-10-08 18:36:28.673011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.396 [2024-10-08 18:36:28.673024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.396 [2024-10-08 18:36:28.673031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.396 [2024-10-08 18:36:28.673037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.396 [2024-10-08 18:36:28.673051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.396 qpair failed and we were unable to recover it. 00:28:35.396 [2024-10-08 18:36:28.682984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.396 [2024-10-08 18:36:28.683036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.396 [2024-10-08 18:36:28.683049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.396 [2024-10-08 18:36:28.683056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.396 [2024-10-08 18:36:28.683062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:35.396 [2024-10-08 18:36:28.683076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.396 qpair failed and we were unable to recover it. 
[... the same seven-message CONNECT failure sequence repeats for every subsequent reconnect attempt, at roughly 10 ms intervals, from [2024-10-08 18:36:28.693034] through [2024-10-08 18:36:29.344960] (console prefixes 00:28:35.396 through 00:28:36.180). Each attempt logs identical messages with only the timestamps advancing: Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x7f185c000b90; CQ transport error -6 (No such device or address) on qpair id 2; qpair failed and we were unable to recover it. ...]
00:28:36.180 [2024-10-08 18:36:29.354871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.180 [2024-10-08 18:36:29.354923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.180 [2024-10-08 18:36:29.354936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.180 [2024-10-08 18:36:29.354943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.180 [2024-10-08 18:36:29.354949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.180 [2024-10-08 18:36:29.354962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.180 qpair failed and we were unable to recover it. 00:28:36.180 [2024-10-08 18:36:29.364921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.180 [2024-10-08 18:36:29.364979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.180 [2024-10-08 18:36:29.364992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.180 [2024-10-08 18:36:29.365001] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.180 [2024-10-08 18:36:29.365007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.180 [2024-10-08 18:36:29.365021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.180 qpair failed and we were unable to recover it. 00:28:36.180 [2024-10-08 18:36:29.374939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.180 [2024-10-08 18:36:29.374987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.180 [2024-10-08 18:36:29.375000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.180 [2024-10-08 18:36:29.375006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.181 [2024-10-08 18:36:29.375013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.181 [2024-10-08 18:36:29.375027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.181 qpair failed and we were unable to recover it. 
00:28:36.181 [2024-10-08 18:36:29.384969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.181 [2024-10-08 18:36:29.385050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.181 [2024-10-08 18:36:29.385064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.181 [2024-10-08 18:36:29.385071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.181 [2024-10-08 18:36:29.385077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.181 [2024-10-08 18:36:29.385092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.181 qpair failed and we were unable to recover it. 00:28:36.181 [2024-10-08 18:36:29.395006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.181 [2024-10-08 18:36:29.395059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.181 [2024-10-08 18:36:29.395072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.181 [2024-10-08 18:36:29.395079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.181 [2024-10-08 18:36:29.395085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.181 [2024-10-08 18:36:29.395099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.181 qpair failed and we were unable to recover it. 00:28:36.181 [2024-10-08 18:36:29.405052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.181 [2024-10-08 18:36:29.405106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.181 [2024-10-08 18:36:29.405119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.181 [2024-10-08 18:36:29.405126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.181 [2024-10-08 18:36:29.405132] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.181 [2024-10-08 18:36:29.405146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.181 qpair failed and we were unable to recover it. 
00:28:36.181 [2024-10-08 18:36:29.414996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.181 [2024-10-08 18:36:29.415046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.181 [2024-10-08 18:36:29.415059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.181 [2024-10-08 18:36:29.415066] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.181 [2024-10-08 18:36:29.415072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.181 [2024-10-08 18:36:29.415086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.181 qpair failed and we were unable to recover it. 00:28:36.181 [2024-10-08 18:36:29.425091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.181 [2024-10-08 18:36:29.425144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.181 [2024-10-08 18:36:29.425157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.181 [2024-10-08 18:36:29.425164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.181 [2024-10-08 18:36:29.425169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.181 [2024-10-08 18:36:29.425184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.181 qpair failed and we were unable to recover it. 00:28:36.181 [2024-10-08 18:36:29.435147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.181 [2024-10-08 18:36:29.435203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.181 [2024-10-08 18:36:29.435217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.181 [2024-10-08 18:36:29.435223] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.181 [2024-10-08 18:36:29.435229] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.181 [2024-10-08 18:36:29.435243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.181 qpair failed and we were unable to recover it. 
00:28:36.181 [2024-10-08 18:36:29.445150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.181 [2024-10-08 18:36:29.445200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.181 [2024-10-08 18:36:29.445214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.181 [2024-10-08 18:36:29.445221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.181 [2024-10-08 18:36:29.445227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.181 [2024-10-08 18:36:29.445242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.181 qpair failed and we were unable to recover it. 00:28:36.181 [2024-10-08 18:36:29.455181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.181 [2024-10-08 18:36:29.455235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.181 [2024-10-08 18:36:29.455248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.181 [2024-10-08 18:36:29.455258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.181 [2024-10-08 18:36:29.455264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.181 [2024-10-08 18:36:29.455279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.181 qpair failed and we were unable to recover it. 00:28:36.181 [2024-10-08 18:36:29.465221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.181 [2024-10-08 18:36:29.465276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.181 [2024-10-08 18:36:29.465292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.181 [2024-10-08 18:36:29.465302] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.181 [2024-10-08 18:36:29.465311] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.181 [2024-10-08 18:36:29.465328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.181 qpair failed and we were unable to recover it. 
00:28:36.181 [2024-10-08 18:36:29.475264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.181 [2024-10-08 18:36:29.475325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.181 [2024-10-08 18:36:29.475341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.181 [2024-10-08 18:36:29.475349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.181 [2024-10-08 18:36:29.475355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.181 [2024-10-08 18:36:29.475370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.181 qpair failed and we were unable to recover it. 00:28:36.181 [2024-10-08 18:36:29.485273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.181 [2024-10-08 18:36:29.485362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.181 [2024-10-08 18:36:29.485383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.181 [2024-10-08 18:36:29.485391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.181 [2024-10-08 18:36:29.485397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.181 [2024-10-08 18:36:29.485413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.181 qpair failed and we were unable to recover it. 00:28:36.181 [2024-10-08 18:36:29.495297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.181 [2024-10-08 18:36:29.495350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.181 [2024-10-08 18:36:29.495364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.181 [2024-10-08 18:36:29.495370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.181 [2024-10-08 18:36:29.495380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.181 [2024-10-08 18:36:29.495395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.181 qpair failed and we were unable to recover it. 
00:28:36.441 [2024-10-08 18:36:29.505312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.441 [2024-10-08 18:36:29.505367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.441 [2024-10-08 18:36:29.505384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.441 [2024-10-08 18:36:29.505391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.441 [2024-10-08 18:36:29.505397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.441 [2024-10-08 18:36:29.505411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.441 qpair failed and we were unable to recover it. 00:28:36.441 [2024-10-08 18:36:29.515357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.441 [2024-10-08 18:36:29.515419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.441 [2024-10-08 18:36:29.515432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.441 [2024-10-08 18:36:29.515439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.441 [2024-10-08 18:36:29.515445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.441 [2024-10-08 18:36:29.515460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.441 qpair failed and we were unable to recover it. 00:28:36.441 [2024-10-08 18:36:29.525403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.441 [2024-10-08 18:36:29.525500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.441 [2024-10-08 18:36:29.525514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.441 [2024-10-08 18:36:29.525520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.441 [2024-10-08 18:36:29.525526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.441 [2024-10-08 18:36:29.525541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.441 qpair failed and we were unable to recover it. 
00:28:36.441 [2024-10-08 18:36:29.535461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.441 [2024-10-08 18:36:29.535518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.441 [2024-10-08 18:36:29.535532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.441 [2024-10-08 18:36:29.535538] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.441 [2024-10-08 18:36:29.535544] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.441 [2024-10-08 18:36:29.535558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.441 qpair failed and we were unable to recover it. 00:28:36.441 [2024-10-08 18:36:29.545391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.441 [2024-10-08 18:36:29.545447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.441 [2024-10-08 18:36:29.545464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.441 [2024-10-08 18:36:29.545471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.441 [2024-10-08 18:36:29.545476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.441 [2024-10-08 18:36:29.545491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.441 qpair failed and we were unable to recover it. 00:28:36.441 [2024-10-08 18:36:29.555458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.441 [2024-10-08 18:36:29.555516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.441 [2024-10-08 18:36:29.555530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.441 [2024-10-08 18:36:29.555537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.441 [2024-10-08 18:36:29.555542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.441 [2024-10-08 18:36:29.555556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.441 qpair failed and we were unable to recover it. 
00:28:36.441 [2024-10-08 18:36:29.565490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.441 [2024-10-08 18:36:29.565569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.441 [2024-10-08 18:36:29.565583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.441 [2024-10-08 18:36:29.565590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.441 [2024-10-08 18:36:29.565596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.441 [2024-10-08 18:36:29.565610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.441 qpair failed and we were unable to recover it. 00:28:36.441 [2024-10-08 18:36:29.575489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.442 [2024-10-08 18:36:29.575568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.442 [2024-10-08 18:36:29.575582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.442 [2024-10-08 18:36:29.575589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.442 [2024-10-08 18:36:29.575596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.442 [2024-10-08 18:36:29.575610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.442 qpair failed and we were unable to recover it. 00:28:36.442 [2024-10-08 18:36:29.585546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.442 [2024-10-08 18:36:29.585596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.442 [2024-10-08 18:36:29.585609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.442 [2024-10-08 18:36:29.585616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.442 [2024-10-08 18:36:29.585621] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.442 [2024-10-08 18:36:29.585639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.442 qpair failed and we were unable to recover it. 
00:28:36.442 [2024-10-08 18:36:29.595614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.442 [2024-10-08 18:36:29.595696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.442 [2024-10-08 18:36:29.595710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.442 [2024-10-08 18:36:29.595717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.442 [2024-10-08 18:36:29.595723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.442 [2024-10-08 18:36:29.595737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.442 qpair failed and we were unable to recover it. 00:28:36.442 [2024-10-08 18:36:29.605647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.442 [2024-10-08 18:36:29.605712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.442 [2024-10-08 18:36:29.605726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.442 [2024-10-08 18:36:29.605733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.442 [2024-10-08 18:36:29.605739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.442 [2024-10-08 18:36:29.605753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.442 qpair failed and we were unable to recover it. 00:28:36.442 [2024-10-08 18:36:29.615638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.442 [2024-10-08 18:36:29.615704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.442 [2024-10-08 18:36:29.615718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.442 [2024-10-08 18:36:29.615724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.442 [2024-10-08 18:36:29.615730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.442 [2024-10-08 18:36:29.615744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.442 qpair failed and we were unable to recover it. 
00:28:36.442 [2024-10-08 18:36:29.625666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.442 [2024-10-08 18:36:29.625722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.442 [2024-10-08 18:36:29.625736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.442 [2024-10-08 18:36:29.625742] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.442 [2024-10-08 18:36:29.625748] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.442 [2024-10-08 18:36:29.625762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.442 qpair failed and we were unable to recover it. 00:28:36.442 [2024-10-08 18:36:29.635702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.442 [2024-10-08 18:36:29.635764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.442 [2024-10-08 18:36:29.635780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.442 [2024-10-08 18:36:29.635786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.442 [2024-10-08 18:36:29.635792] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.442 [2024-10-08 18:36:29.635806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.442 qpair failed and we were unable to recover it. 00:28:36.442 [2024-10-08 18:36:29.645717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.442 [2024-10-08 18:36:29.645772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.442 [2024-10-08 18:36:29.645786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.442 [2024-10-08 18:36:29.645793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.442 [2024-10-08 18:36:29.645799] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.442 [2024-10-08 18:36:29.645813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.442 qpair failed and we were unable to recover it. 
00:28:36.442 [2024-10-08 18:36:29.655691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.442 [2024-10-08 18:36:29.655743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.442 [2024-10-08 18:36:29.655757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.442 [2024-10-08 18:36:29.655763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.442 [2024-10-08 18:36:29.655770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.442 [2024-10-08 18:36:29.655783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.442 qpair failed and we were unable to recover it. 00:28:36.442 [2024-10-08 18:36:29.665767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.442 [2024-10-08 18:36:29.665822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.442 [2024-10-08 18:36:29.665836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.442 [2024-10-08 18:36:29.665842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.442 [2024-10-08 18:36:29.665848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.442 [2024-10-08 18:36:29.665863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.442 qpair failed and we were unable to recover it. 00:28:36.442 [2024-10-08 18:36:29.675831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.442 [2024-10-08 18:36:29.675885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.442 [2024-10-08 18:36:29.675899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.442 [2024-10-08 18:36:29.675905] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.442 [2024-10-08 18:36:29.675911] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.442 [2024-10-08 18:36:29.675928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.442 qpair failed and we were unable to recover it. 
00:28:36.442 [2024-10-08 18:36:29.685765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.442 [2024-10-08 18:36:29.685829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.442 [2024-10-08 18:36:29.685843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.442 [2024-10-08 18:36:29.685850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.442 [2024-10-08 18:36:29.685856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.442 [2024-10-08 18:36:29.685870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.442 qpair failed and we were unable to recover it. 00:28:36.442 [2024-10-08 18:36:29.695915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.442 [2024-10-08 18:36:29.695976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.442 [2024-10-08 18:36:29.695989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.442 [2024-10-08 18:36:29.695996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.442 [2024-10-08 18:36:29.696001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.442 [2024-10-08 18:36:29.696016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.442 qpair failed and we were unable to recover it. 00:28:36.442 [2024-10-08 18:36:29.705895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.442 [2024-10-08 18:36:29.705946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.442 [2024-10-08 18:36:29.705959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.442 [2024-10-08 18:36:29.705965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.442 [2024-10-08 18:36:29.705971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.442 [2024-10-08 18:36:29.705986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.442 qpair failed and we were unable to recover it. 
00:28:36.442 [2024-10-08 18:36:29.715917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.443 [2024-10-08 18:36:29.715972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.443 [2024-10-08 18:36:29.715988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.443 [2024-10-08 18:36:29.715998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.443 [2024-10-08 18:36:29.716007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.443 [2024-10-08 18:36:29.716026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.443 qpair failed and we were unable to recover it. 00:28:36.443 [2024-10-08 18:36:29.725976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.443 [2024-10-08 18:36:29.726043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.443 [2024-10-08 18:36:29.726058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.443 [2024-10-08 18:36:29.726065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.443 [2024-10-08 18:36:29.726071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.443 [2024-10-08 18:36:29.726086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.443 qpair failed and we were unable to recover it. 00:28:36.443 [2024-10-08 18:36:29.735920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.443 [2024-10-08 18:36:29.736023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.443 [2024-10-08 18:36:29.736038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.443 [2024-10-08 18:36:29.736045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.443 [2024-10-08 18:36:29.736051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.443 [2024-10-08 18:36:29.736066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.443 qpair failed and we were unable to recover it. 
00:28:36.443 [2024-10-08 18:36:29.745982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.443 [2024-10-08 18:36:29.746073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.443 [2024-10-08 18:36:29.746088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.443 [2024-10-08 18:36:29.746096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.443 [2024-10-08 18:36:29.746102] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.443 [2024-10-08 18:36:29.746117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.443 qpair failed and we were unable to recover it. 00:28:36.443 [2024-10-08 18:36:29.756068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.443 [2024-10-08 18:36:29.756129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.443 [2024-10-08 18:36:29.756143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.443 [2024-10-08 18:36:29.756150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.443 [2024-10-08 18:36:29.756156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.443 [2024-10-08 18:36:29.756170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.443 qpair failed and we were unable to recover it. 00:28:36.702 [2024-10-08 18:36:29.766105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.702 [2024-10-08 18:36:29.766165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.702 [2024-10-08 18:36:29.766178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.702 [2024-10-08 18:36:29.766185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.702 [2024-10-08 18:36:29.766194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.702 [2024-10-08 18:36:29.766210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.702 qpair failed and we were unable to recover it. 
00:28:36.702 [2024-10-08 18:36:29.776045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.702 [2024-10-08 18:36:29.776141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.702 [2024-10-08 18:36:29.776155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.702 [2024-10-08 18:36:29.776161] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.702 [2024-10-08 18:36:29.776167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.702 [2024-10-08 18:36:29.776182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.702 qpair failed and we were unable to recover it. 00:28:36.702 [2024-10-08 18:36:29.786127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.702 [2024-10-08 18:36:29.786180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.702 [2024-10-08 18:36:29.786194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.702 [2024-10-08 18:36:29.786201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.702 [2024-10-08 18:36:29.786208] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.702 [2024-10-08 18:36:29.786222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.702 qpair failed and we were unable to recover it. 00:28:36.702 [2024-10-08 18:36:29.796096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.702 [2024-10-08 18:36:29.796169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.702 [2024-10-08 18:36:29.796182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.702 [2024-10-08 18:36:29.796189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.702 [2024-10-08 18:36:29.796195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90 00:28:36.702 [2024-10-08 18:36:29.796209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.702 qpair failed and we were unable to recover it. 
00:28:36.703 [2024-10-08 18:36:29.806120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.703 [2024-10-08 18:36:29.806172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.703 [2024-10-08 18:36:29.806186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.703 [2024-10-08 18:36:29.806193] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.703 [2024-10-08 18:36:29.806199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:36.703 [2024-10-08 18:36:29.806213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.703 qpair failed and we were unable to recover it.
00:28:36.703 [2024-10-08 18:36:29.816164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.703 [2024-10-08 18:36:29.816257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.703 [2024-10-08 18:36:29.816273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.703 [2024-10-08 18:36:29.816280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.703 [2024-10-08 18:36:29.816286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:36.703 [2024-10-08 18:36:29.816301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.703 qpair failed and we were unable to recover it.
00:28:36.703 [2024-10-08 18:36:29.826164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.703 [2024-10-08 18:36:29.826212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.703 [2024-10-08 18:36:29.826226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.703 [2024-10-08 18:36:29.826233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.703 [2024-10-08 18:36:29.826239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f185c000b90
00:28:36.703 [2024-10-08 18:36:29.826254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.703 qpair failed and we were unable to recover it.
00:28:36.703 [2024-10-08 18:36:29.836274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.703 [2024-10-08 18:36:29.836390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.703 [2024-10-08 18:36:29.836443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.703 [2024-10-08 18:36:29.836467] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.703 [2024-10-08 18:36:29.836486] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1858000b90
00:28:36.703 [2024-10-08 18:36:29.836533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:36.703 qpair failed and we were unable to recover it.
00:28:36.703 [2024-10-08 18:36:29.846295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.703 [2024-10-08 18:36:29.846369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.703 [2024-10-08 18:36:29.846405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.703 [2024-10-08 18:36:29.846418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.703 [2024-10-08 18:36:29.846429] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1858000b90
00:28:36.703 [2024-10-08 18:36:29.846457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:36.703 qpair failed and we were unable to recover it.
00:28:36.703 [2024-10-08 18:36:29.856350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.703 [2024-10-08 18:36:29.856475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.703 [2024-10-08 18:36:29.856529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.703 [2024-10-08 18:36:29.856562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.703 [2024-10-08 18:36:29.856581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1864000b90
00:28:36.703 [2024-10-08 18:36:29.856628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.703 qpair failed and we were unable to recover it.
00:28:36.703 [2024-10-08 18:36:29.866321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.703 [2024-10-08 18:36:29.866417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.703 [2024-10-08 18:36:29.866445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.703 [2024-10-08 18:36:29.866459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.703 [2024-10-08 18:36:29.866472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1864000b90
00:28:36.703 [2024-10-08 18:36:29.866499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.703 qpair failed and we were unable to recover it.
00:28:36.703 [2024-10-08 18:36:29.866687] nvme_ctrlr.c:4536:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:28:36.703 A controller has encountered a failure and is being reset.
00:28:36.703 [2024-10-08 18:36:29.876541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.703 [2024-10-08 18:36:29.876659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.703 [2024-10-08 18:36:29.876729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.703 [2024-10-08 18:36:29.876766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.703 [2024-10-08 18:36:29.876798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa01c60
00:28:36.703 [2024-10-08 18:36:29.876860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:36.703 qpair failed and we were unable to recover it.
00:28:36.703 [2024-10-08 18:36:29.886438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.703 [2024-10-08 18:36:29.886532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.703 [2024-10-08 18:36:29.886563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.703 [2024-10-08 18:36:29.886584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.703 [2024-10-08 18:36:29.886603] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa01c60
00:28:36.703 [2024-10-08 18:36:29.886643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:36.703 qpair failed and we were unable to recover it.
00:28:36.703 Controller properly reset.
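Each block above is one retry of the same host-side sequence: the target rejects the I/O-queue CONNECT because the admin controller it references is already gone ("Unknown controller ID 0x1"), the host sees the CONNECT completion fail with sct 1, sc 130 (a command-specific status; 0x82 falls in the NVMe-oF fabrics CONNECT invalid-parameters range), and the qpair is torn down with transport error -6. When triaging a storm like this offline, a small bash sketch along these lines summarizes it (the log file name is illustrative):

  # count failed fabric CONNECT attempts
  grep -c 'Connect command failed' console.log
  # list the distinct transport qpairs that were involved
  grep -o 'tqpair=0x[0-9a-f]*' console.log | sort -u
  # break the failures down per qpair id
  grep -o 'on qpair id [0-9]*' console.log | sort | uniq -c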
00:28:36.703 Initializing NVMe Controllers
00:28:36.703 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:36.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:36.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:28:36.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:28:36.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:28:36.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:28:36.703 Initialization complete. Launching workers.
00:28:36.703 Starting thread on core 1
00:28:36.703 Starting thread on core 2
00:28:36.703 Starting thread on core 3
00:28:36.703 Starting thread on core 0
00:28:36.703 18:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:28:36.703 
00:28:36.703 real 0m11.498s
00:28:36.703 user 0m21.473s
00:28:36.703 sys 0m4.718s
00:28:36.703 18:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:36.703 18:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:36.703 ************************************
00:28:36.703 END TEST nvmf_target_disconnect_tc2
00:28:36.703 ************************************
00:28:36.703 18:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:28:36.703 18:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:28:36.703 18:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:28:36.703 18:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup
00:28:36.703 18:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:28:36.703 18:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:36.703 18:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:28:36.703 18:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:36.703 18:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:36.703 rmmod nvme_tcp
00:28:36.703 rmmod nvme_fabrics
00:28:36.703 rmmod nvme_keyring
00:28:36.703 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:36.963 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:28:36.963 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:28:36.963 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 584405 ']'
00:28:36.963 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 584405
00:28:36.963 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 584405 ']'
00:28:36.963 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 584405
00:28:36.963 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname
00:28:36.963 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:36.963 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 584405
00:28:36.963 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4
00:28:36.963 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']'
00:28:36.963 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 584405'
00:28:36.963 killing process with pid 584405
00:28:36.963 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 584405
00:28:36.963 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 584405
00:28:37.222 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:28:37.222 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:28:37.222 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:28:37.222 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:28:37.222 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save
00:28:37.222 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:28:37.222 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore
00:28:37.222 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:37.222 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:37.222 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:37.222 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:37.222 18:36:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:39.126 18:36:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:39.126 
00:28:39.126 real 0m20.321s
00:28:39.126 user 0m49.585s
00:28:39.126 sys 0m9.654s
00:28:39.126 18:36:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:39.126 18:36:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:28:39.126 ************************************
00:28:39.126 END TEST nvmf_target_disconnect
00:28:39.126 ************************************
00:28:39.126 18:36:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:28:39.126 
00:28:39.126 real 6m9.659s
00:28:39.126 user 11m16.967s
00:28:39.126 sys 1m59.244s
00:28:39.126 18:36:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:39.126 18:36:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:39.126 ************************************
00:28:39.126 END TEST nvmf_host
00:28:39.126 ************************************
00:28:39.126 18:36:32 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:28:39.126 18:36:32 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:28:39.126 18:36:32 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:39.126 18:36:32 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:39.126 18:36:32 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:39.126 18:36:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:39.384 ************************************ 00:28:39.384 START TEST nvmf_target_core_interrupt_mode 00:28:39.384 ************************************ 00:28:39.384 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:39.384 * Looking for test storage... 00:28:39.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:39.384 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:39.384 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:28:39.384 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:39.384 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:39.384 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.384 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.384 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.384 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.384 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.384 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.384 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.384 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.384 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:39.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.385 --rc genhtml_branch_coverage=1 00:28:39.385 --rc genhtml_function_coverage=1 00:28:39.385 --rc genhtml_legend=1 00:28:39.385 --rc geninfo_all_blocks=1 00:28:39.385 --rc geninfo_unexecuted_blocks=1 00:28:39.385 00:28:39.385 ' 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:39.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.385 --rc genhtml_branch_coverage=1 00:28:39.385 --rc genhtml_function_coverage=1 00:28:39.385 --rc genhtml_legend=1 00:28:39.385 --rc geninfo_all_blocks=1 00:28:39.385 --rc geninfo_unexecuted_blocks=1 00:28:39.385 00:28:39.385 ' 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:39.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.385 --rc genhtml_branch_coverage=1 00:28:39.385 --rc genhtml_function_coverage=1 00:28:39.385 --rc genhtml_legend=1 00:28:39.385 --rc geninfo_all_blocks=1 00:28:39.385 --rc geninfo_unexecuted_blocks=1 00:28:39.385 00:28:39.385 ' 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:39.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.385 --rc genhtml_branch_coverage=1 00:28:39.385 --rc genhtml_function_coverage=1 00:28:39.385 --rc genhtml_legend=1 00:28:39.385 --rc geninfo_all_blocks=1 00:28:39.385 --rc geninfo_unexecuted_blocks=1 00:28:39.385 00:28:39.385 ' 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:39.385 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:39.644 ************************************ 00:28:39.644 START TEST nvmf_abort 00:28:39.644 ************************************ 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:39.644 * Looking for test storage... 00:28:39.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:39.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.644 --rc genhtml_branch_coverage=1 00:28:39.644 --rc genhtml_function_coverage=1 00:28:39.644 --rc genhtml_legend=1 00:28:39.644 --rc geninfo_all_blocks=1 00:28:39.644 --rc geninfo_unexecuted_blocks=1 00:28:39.644 00:28:39.644 ' 00:28:39.644 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:39.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.644 --rc genhtml_branch_coverage=1 00:28:39.644 --rc genhtml_function_coverage=1 00:28:39.644 --rc genhtml_legend=1 00:28:39.644 --rc geninfo_all_blocks=1 00:28:39.645 --rc geninfo_unexecuted_blocks=1 00:28:39.645 00:28:39.645 ' 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:39.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.645 --rc genhtml_branch_coverage=1 00:28:39.645 --rc genhtml_function_coverage=1 00:28:39.645 --rc genhtml_legend=1 00:28:39.645 --rc geninfo_all_blocks=1 00:28:39.645 --rc geninfo_unexecuted_blocks=1 00:28:39.645 00:28:39.645 ' 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:39.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.645 --rc genhtml_branch_coverage=1 00:28:39.645 --rc genhtml_function_coverage=1 00:28:39.645 --rc genhtml_legend=1 00:28:39.645 --rc geninfo_all_blocks=1 00:28:39.645 --rc geninfo_unexecuted_blocks=1 00:28:39.645 00:28:39.645 ' 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.645 18:36:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:39.645 18:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:46.215 18:36:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:46.215 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
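The detection loop being traced here maps each matching PCI function to its kernel net device by globbing sysfs; the same lookup can be reproduced by hand with a minimal bash sketch (PCI addresses taken from this run):

  # each PCI network function exposes its netdev name under net/
  for pci in 0000:86:00.0 0000:86:00.1; do
      ls "/sys/bus/pci/devices/$pci/net/"   # prints e.g. cvl_0_0
  done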
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:28:46.215 Found 0000:86:00.1 (0x8086 - 0x159b)
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]]
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:28:46.215 Found net devices under 0000:86:00.0: cvl_0_0
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:28:46.215 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]]
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:28:46.216 Found net devices under 0000:86:00.1: cvl_0_1
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:46.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:46.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms
00:28:46.216 
00:28:46.216 --- 10.0.0.2 ping statistics ---
00:28:46.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:46.216 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:46.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:46.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms
00:28:46.216 
00:28:46.216 --- 10.0.0.1 ping statistics ---
00:28:46.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:46.216 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=589153
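Condensed, the bring-up just traced puts the target-side port into a private network namespace, leaves the initiator port in the default namespace, and verifies connectivity; the same steps in plain iproute2 commands (interface names and addresses as used in this run) are:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays outside
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                   # round trip measured above: 0.316 ms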
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 589153
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 589153 ']'
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:46.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:46.216 18:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:28:46.216 [2024-10-08 18:36:38.907466] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:28:46.216 [2024-10-08 18:36:38.908315] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
00:28:46.216 [2024-10-08 18:36:38.908346] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:46.216 [2024-10-08 18:36:38.965341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:46.216 [2024-10-08 18:36:39.041676] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:46.216 [2024-10-08 18:36:39.041717] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:46.216 [2024-10-08 18:36:39.041725] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:46.216 [2024-10-08 18:36:39.041731] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:46.216 [2024-10-08 18:36:39.041736] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:46.216 [2024-10-08 18:36:39.042682] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:28:46.216 [2024-10-08 18:36:39.042790] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:28:46.216 [2024-10-08 18:36:39.042791] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:28:46.216 [2024-10-08 18:36:39.120704] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:28:46.216 [2024-10-08 18:36:39.121537] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:28:46.216 [2024-10-08 18:36:39.121539] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
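A reading of the nvmf_tgt flags above, consistent with the NOTICE lines around it: -m 0xE is the reactor core mask (binary 1110, so cores 1, 2 and 3, matching the three reactors started), -e 0xFFFF enables all tracepoint groups (hence the spdk_trace hints), -i 0 selects shared-memory instance 0 (the /dev/shm/nvmf_trace.0 file mentioned in the trace), and --interrupt-mode switches the reactors from busy polling to event-driven operation, which is the variant this test run exercises:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE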
00:28:46.216 [2024-10-08 18:36:39.121736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:46.476 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:46.476 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:28:46.476 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:46.476 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:46.476 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.476 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.476 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:46.476 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.476 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.476 [2024-10-08 18:36:39.787666] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.735 Malloc0 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.735 Delay0 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.735 [2024-10-08 18:36:39.863577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.735 18:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:46.735 [2024-10-08 18:36:39.983242] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:49.269 Initializing NVMe Controllers 00:28:49.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:49.269 controller IO queue size 128 less than required 00:28:49.269 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:49.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:49.269 Initialization complete. Launching workers. 
00:28:49.269 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37938 00:28:49.269 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37995, failed to submit 66 00:28:49.269 success 37938, unsuccessful 57, failed 0 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.269 rmmod nvme_tcp 00:28:49.269 rmmod nvme_fabrics 00:28:49.269 rmmod nvme_keyring 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 589153 ']' 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 589153 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 589153 ']' 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 589153 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 589153 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 589153' 00:28:49.269 killing process with pid 589153 00:28:49.269 
18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 589153 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 589153 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.269 18:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.171 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:51.171 00:28:51.171 real 0m11.757s 00:28:51.171 user 0m10.500s 00:28:51.171 sys 0m5.689s 00:28:51.171 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:51.171 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:51.171 ************************************ 00:28:51.171 END TEST nvmf_abort 00:28:51.171 ************************************ 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:51.430 ************************************ 00:28:51.430 START TEST nvmf_ns_hotplug_stress 00:28:51.430 ************************************ 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:51.430 * Looking for test storage... 
00:28:51.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:51.430 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:51.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.431 --rc genhtml_branch_coverage=1 00:28:51.431 --rc genhtml_function_coverage=1 00:28:51.431 --rc genhtml_legend=1 00:28:51.431 --rc geninfo_all_blocks=1 00:28:51.431 --rc geninfo_unexecuted_blocks=1 00:28:51.431 00:28:51.431 ' 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:51.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.431 --rc genhtml_branch_coverage=1 00:28:51.431 --rc genhtml_function_coverage=1 00:28:51.431 --rc genhtml_legend=1 00:28:51.431 --rc geninfo_all_blocks=1 00:28:51.431 --rc geninfo_unexecuted_blocks=1 00:28:51.431 00:28:51.431 ' 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:51.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.431 --rc genhtml_branch_coverage=1 00:28:51.431 --rc genhtml_function_coverage=1 00:28:51.431 --rc genhtml_legend=1 00:28:51.431 --rc geninfo_all_blocks=1 00:28:51.431 --rc geninfo_unexecuted_blocks=1 00:28:51.431 00:28:51.431 ' 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:51.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.431 --rc genhtml_branch_coverage=1 00:28:51.431 --rc genhtml_function_coverage=1 
00:28:51.431 --rc genhtml_legend=1 00:28:51.431 --rc geninfo_all_blocks=1 00:28:51.431 --rc geninfo_unexecuted_blocks=1 00:28:51.431 00:28:51.431 ' 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.431 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.690 18:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.254 18:36:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.254 18:36:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:58.254 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:58.254 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.254 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:58.255 
18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:58.255 Found net devices under 0000:86:00.0: cvl_0_0 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:58.255 Found net devices under 0000:86:00.1: cvl_0_1 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.255 18:36:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:58.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:28:58.255 00:28:58.255 --- 10.0.0.2 ping statistics --- 00:28:58.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.255 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:58.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:28:58.255 00:28:58.255 --- 10.0.0.1 ping statistics --- 00:28:58.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.255 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=593150 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 593150 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 593150 ']' 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:58.255 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:58.255 [2024-10-08 18:36:50.721397] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:58.255 [2024-10-08 18:36:50.722290] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:28:58.255 [2024-10-08 18:36:50.722322] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.255 [2024-10-08 18:36:50.795205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:58.255 [2024-10-08 18:36:50.872790] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.255 [2024-10-08 18:36:50.872829] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.255 [2024-10-08 18:36:50.872836] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.255 [2024-10-08 18:36:50.872843] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.256 [2024-10-08 18:36:50.872848] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:58.256 [2024-10-08 18:36:50.873809] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.256 [2024-10-08 18:36:50.873902] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.256 [2024-10-08 18:36:50.873901] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.256 [2024-10-08 18:36:50.951067] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:58.256 [2024-10-08 18:36:50.951197] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:58.256 [2024-10-08 18:36:50.951443] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:58.256 [2024-10-08 18:36:50.951768] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
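For reference, the NIC discovery that opened this test (the 'Found 0000:86:00.0 (0x8086 - 0x159b)' lines) reduces to matching whitelisted PCI IDs and reading net device names back out of sysfs. A rough sketch for the E810 case only; the harness checks several Intel and Mellanox IDs:

# 0x159b is one of the Intel E810 device IDs accepted for NVMe/TCP runs
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -d "$netdir" ] && echo "Found net device under $pci: ${netdir##*/}"
    done
done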
00:28:58.256 18:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:58.256 18:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:28:58.256 18:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:58.256 18:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:58.256 18:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:58.514 18:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.514 18:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:28:58.514 18:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:58.514 [2024-10-08 18:36:51.774870] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.514 18:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:58.773 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.032 [2024-10-08 18:36:52.175296] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.032 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:59.290 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:59.290 Malloc0 00:28:59.290 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:59.549 Delay0 00:28:59.549 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:59.807 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:00.066 NULL1 00:29:00.066 18:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
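Stripped of the xtrace noise, the stack this test just configured is a short RPC sequence (same calls and arguments as traced above; rpc.py talks to its default socket):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                 # allow any host, max 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0           # 32 MiB ramdisk, 512 B blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000    # 1,000,000 us injected read/write delays
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512                # resizable 1000 MiB null bdev
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1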
00:29:00.066 18:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:00.066 18:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=593637 00:29:00.066 18:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:00.066 18:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.324 18:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:00.582 18:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:00.582 18:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:00.840 true 00:29:00.840 18:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:00.840 18:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.099 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.099 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:01.099 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:01.357 true 00:29:01.357 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:01.357 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.615 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.872 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:01.872 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:02.130 true 00:29:02.130 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:02.130 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.388 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.388 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:02.388 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:02.646 true 00:29:02.646 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:02.646 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.904 18:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:03.161 18:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:03.161 18:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:03.161 true 00:29:03.419 18:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:03.419 18:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:03.419 18:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:03.676 18:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:03.676 18:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:03.934 true 00:29:03.934 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:03.934 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.191 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:29:04.449 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:04.449 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:04.449 true 00:29:04.449 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:04.449 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.707 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.965 18:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:04.965 18:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:05.223 true 00:29:05.223 18:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:05.223 18:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.536 18:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.536 18:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:05.536 18:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:05.864 true 00:29:05.864 18:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:05.864 18:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.123 18:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.123 18:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:06.123 18:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:06.380 true 00:29:06.380 18:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:06.380 
18:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.637 18:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.895 18:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:06.895 18:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:07.154 true 00:29:07.154 18:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:07.154 18:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.412 18:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:07.412 18:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:07.412 18:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:07.670 true 00:29:07.670 18:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:07.670 18:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.927 18:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.185 18:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:08.185 18:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:08.185 true 00:29:08.443 18:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:08.443 18:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.443 18:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.702 18:37:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:08.702 18:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:08.959 true 00:29:08.959 18:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:08.959 18:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.218 18:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.476 18:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:09.476 18:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:09.476 true 00:29:09.476 18:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:09.476 18:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.734 18:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.992 18:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:09.992 18:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:10.250 true 00:29:10.250 18:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:10.250 18:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.508 18:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.508 18:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:10.508 18:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:10.765 true 00:29:10.765 18:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:10.765 18:37:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.023 18:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.281 18:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:11.281 18:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:11.539 true 00:29:11.539 18:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:11.539 18:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.539 18:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.797 18:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:11.797 18:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:12.055 true 00:29:12.055 18:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:12.055 18:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.313 18:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.571 18:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:12.571 18:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:12.571 true 00:29:12.571 18:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:12.571 18:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.829 18:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.087 18:37:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:13.087 18:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:13.345 true 00:29:13.346 18:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:13.346 18:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.603 18:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.861 18:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:13.861 18:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:13.861 true 00:29:13.861 18:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:13.861 18:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:14.119 18:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.376 18:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:14.376 18:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:14.635 true 00:29:14.635 18:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:14.635 18:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:14.893 18:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.893 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:14.893 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:15.153 true 00:29:15.153 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:15.153 18:37:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.418 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:15.677 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:15.677 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:15.677 true 00:29:15.935 18:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:15.935 18:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.935 18:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.193 18:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:16.193 18:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:16.450 true 00:29:16.450 18:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:16.450 18:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.709 18:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.967 18:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:16.967 18:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:16.967 true 00:29:16.967 18:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:16.967 18:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:17.225 18:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:17.483 18:37:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:17.483 18:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:17.742 true 00:29:17.742 18:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:17.742 18:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:18.000 18:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:18.258 18:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:29:18.258 18:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:29:18.258 true 00:29:18.258 18:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:18.258 18:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:18.515 18:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:18.774 18:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:29:18.774 18:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:29:19.032 true 00:29:19.032 18:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:19.032 18:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:19.290 18:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:19.290 18:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:29:19.290 18:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:29:19.549 true 00:29:19.549 18:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:19.549 18:37:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:19.807 18:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:20.066 18:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:29:20.066 18:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:29:20.066 true 00:29:20.324 18:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:20.324 18:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.324 18:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:20.582 18:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:29:20.582 18:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:29:20.841 true 00:29:20.841 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:20.841 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.099 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.099 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:29:21.099 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:29:21.356 true 00:29:21.357 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:21.357 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.615 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.874 18:37:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:29:21.874 18:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:29:22.132 true 00:29:22.132 18:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:22.132 18:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.391 18:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:22.391 18:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:29:22.391 18:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:29:22.649 true 00:29:22.649 18:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:22.649 18:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.907 18:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.165 18:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:29:23.165 18:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:29:23.423 true 00:29:23.423 18:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:23.423 18:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.681 18:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.681 18:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:29:23.681 18:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:29:23.939 true 00:29:23.939 18:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:23.939 18:37:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.197 18:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:24.455 18:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:29:24.455 18:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:29:24.714 true 00:29:24.714 18:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:24.714 18:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.973 18:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:24.973 18:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:29:24.973 18:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:29:25.234 true 00:29:25.234 18:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:25.234 18:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:25.493 18:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:25.751 18:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:29:25.751 18:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:29:26.009 true 00:29:26.009 18:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:26.009 18:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.266 18:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:26.267 18:37:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:29:26.267 18:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:29:26.524 true 00:29:26.524 18:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:26.524 18:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.783 18:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.041 18:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:29:27.041 18:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:29:27.299 true 00:29:27.299 18:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:27.299 18:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.558 18:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.558 18:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:29:27.558 18:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:29:27.817 true 00:29:27.817 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:27.817 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.075 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:28.333 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:29:28.333 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:29:28.591 true 00:29:28.591 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:28.591 18:37:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.850 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:28.850 18:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:29:28.850 18:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:29:29.108 true 00:29:29.108 18:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:29.108 18:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:29.366 18:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:29.625 18:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:29:29.625 18:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:29:29.883 true 00:29:29.883 18:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637 00:29:29.883 18:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:30.141 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:30.141 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:29:30.141 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:29:30.401 true 00:29:30.401 Initializing NVMe Controllers 00:29:30.401 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:30.401 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:29:30.401 Controller IO queue size 128, less than required. 00:29:30.401 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:30.401 WARNING: Some requested NVMe devices were skipped 00:29:30.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:30.401 Initialization complete. Launching workers. 
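The stretch of records above is the first phase of ns_hotplug_stress.sh: for null_size 1001 through 1048, while the spdk_nvme_perf process (PID 593637, launched with -t 30 against 10.0.0.2:4420) is still alive, the script hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attaches the Delay0 bdev, and grows the NULL1 null bdev by one unit per pass. Once perf exits, the kill -0 probe fails and the loop ends, at which point perf flushes its buffered output (the "Initializing NVMe Controllers" lines above and the latency summary below). A minimal sketch of the loop as it can be reconstructed from the traced lines @44-@50; the $rpc shorthand, the PERF_PID variable, and the while-loop framing are assumptions, not the verbatim script:

    # Reconstructed sketch of ns_hotplug_stress.sh @44-@50 (not the verbatim script).
    # $PERF_PID is assumed to hold the spdk_nvme_perf PID (593637 in this run).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID"; do                                      # @44: keep cycling while perf runs
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45: hot-remove NSID 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: re-attach Delay0 as a namespace
        null_size=$((null_size + 1))                                   # @49: 1001, 1002, ..., 1048 in this run
        "$rpc" bdev_null_resize NULL1 "$null_size"                     # @50: resize prints "true" on success
    done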
00:29:30.401 ========================================================
00:29:30.401 Latency(us)
00:29:30.401 Device Information : IOPS MiB/s Average min max
00:29:30.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27987.88 13.67 4573.41 1342.76 9153.94
00:29:30.401 ========================================================
00:29:30.401 Total : 27987.88 13.67 4573.41 1342.76 9153.94
00:29:30.401
00:29:30.401 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 593637
00:29:30.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (593637) - No such process
00:29:30.401 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 593637
00:29:30.401 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:30.658 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:30.916 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:29:30.916 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:29:30.916 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:29:30.916 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:30.916 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:29:30.916 null0
00:29:30.916 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:30.916 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:30.916 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:29:31.174 null1
00:29:31.174 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:31.174 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:31.174 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:29:31.433 null2
00:29:31.433 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:31.433 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:31.433 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:29:31.433 null3
00:29:31.691 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:31.691 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:31.691 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:29:31.691 null4
00:29:31.691 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:31.691 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:31.691 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:29:31.949 null5
00:29:31.949 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:31.949 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:31.949 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:29:32.208 null6
00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:29:32.208 null7
00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
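Records @58-@64 above set up the second phase: nthreads=8, eight null bdevs null0 through null7 (each created as bdev_null_create nullN 100 4096, i.e. size 100 with a 4096-byte block size), and one backgrounded add_remove worker per bdev, each worker's PID pushed into pids for the wait at @66. A sketch of that fan-out, reconstructed from the trace under the same $rpc assumption as the previous sketch:

    # Reconstructed sketch of ns_hotplug_stress.sh @58-@64 (not the verbatim script).
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do            # @59
        "$rpc" bdev_null_create "null$i" 100 4096   # @60: prints the new bdev name (null0 ... null7)
    done
    for ((i = 0; i < nthreads; i++)); do            # @62
        add_remove $((i + 1)) "null$i" &            # @63: worker for NSID i+1 paired with bdev null$i
        pids+=($!)                                  # @64: collect the worker PID
    done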
00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:32.208 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 598811 598812 598814 598817 598818 598820 598822 598824 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.209 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:32.468 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.468 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:32.468 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:32.468 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:32.468 18:37:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:32.468 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:32.468 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:32.468 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.804 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:32.805 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.805 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.805 18:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:33.067 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.067 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:33.067 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:33.067 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:33.067 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:33.067 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:33.067 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:33.067 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
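Each rpc.py call in this loop is a thin JSON-RPC 2.0 client talking to the target over its Unix socket (/var/tmp/spdk.sock by default). For one add/remove pair the requests look approximately like the comment block below; the parameter layout follows SPDK's JSON-RPC documentation of this period and the ids are hypothetical, so verify against the docs matching the commit under test:

    # Approximate wire format (illustrative, not captured from this run):
    #   {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
    #    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
    #               "namespace": {"nsid": 1, "bdev_name": "null0"}}}
    #   {"jsonrpc": "2.0", "id": 2, "method": "nvmf_subsystem_remove_ns",
    #    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 1}}
    # Equivalent CLI calls, as seen throughout the trace:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1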
00:29:33.067 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.067 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.067 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:33.067 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.067 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.068 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:33.327 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.327 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:33.327 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:33.327 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:33.327 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:33.327 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:33.327 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:33.327 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.586 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:33.844 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:33.844 18:37:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:33.844 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:33.844 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.844 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:33.844 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:33.844 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:33.844 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.844 18:37:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.844 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:34.103 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.103 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.103 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:34.103 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:34.103 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:34.103 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:34.103 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:34.103 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:34.103 
18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.103 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:34.103 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:34.361 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.362 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.362 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:34.362 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.362 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.362 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:34.362 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.362 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.362 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:34.620 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:34.620 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.620 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:34.620 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:34.620 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:34.620 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:34.620 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:34.620 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.878 
18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.878 18:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:34.879 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.879 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:34.879 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:34.879 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:34.879 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:34.879 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:34.879 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:34.879 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.137 18:37:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.137 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:35.395 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:35.395 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.395 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:35.395 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:35.395 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:35.395 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:35.395 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:35.395 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:35.653 18:37:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.653 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:35.912 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:35.912 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:35.912 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:35.912 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.912 
18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.912 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:36.170 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:36.170 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:36.170 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:36.170 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:36.170 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:36.170 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:36.170 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:36.170 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:36.428 rmmod nvme_tcp 00:29:36.428 rmmod nvme_fabrics 00:29:36.428 rmmod nvme_keyring 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 593150 ']' 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 593150 00:29:36.428 18:37:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 593150 ']' 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 593150 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 593150 00:29:36.428 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:36.429 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:36.429 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 593150' 00:29:36.429 killing process with pid 593150 00:29:36.429 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 593150 00:29:36.429 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 593150 00:29:36.686 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:36.686 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:36.686 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:36.686 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:36.686 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:29:36.686 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:36.686 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:29:36.686 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.686 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:36.686 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.686 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.686 18:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:39.220 00:29:39.220 real 0m47.478s 00:29:39.220 user 3m1.316s 00:29:39.220 sys 0m21.076s 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:39.220 18:37:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:39.220 ************************************ 00:29:39.220 END TEST nvmf_ns_hotplug_stress 00:29:39.220 ************************************ 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:39.220 ************************************ 00:29:39.220 START TEST nvmf_delete_subsystem 00:29:39.220 ************************************ 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:39.220 * Looking for test storage... 00:29:39.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:39.220 18:37:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:39.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.220 --rc genhtml_branch_coverage=1 00:29:39.220 --rc genhtml_function_coverage=1 00:29:39.220 --rc genhtml_legend=1 00:29:39.220 --rc geninfo_all_blocks=1 00:29:39.220 --rc geninfo_unexecuted_blocks=1 00:29:39.220 00:29:39.220 ' 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:39.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.220 --rc genhtml_branch_coverage=1 00:29:39.220 --rc genhtml_function_coverage=1 00:29:39.220 --rc genhtml_legend=1 00:29:39.220 --rc geninfo_all_blocks=1 00:29:39.220 --rc geninfo_unexecuted_blocks=1 00:29:39.220 00:29:39.220 ' 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:39.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.220 --rc genhtml_branch_coverage=1 00:29:39.220 --rc genhtml_function_coverage=1 00:29:39.220 --rc genhtml_legend=1 00:29:39.220 --rc geninfo_all_blocks=1 00:29:39.220 --rc 
geninfo_unexecuted_blocks=1 00:29:39.220 00:29:39.220 ' 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:39.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.220 --rc genhtml_branch_coverage=1 00:29:39.220 --rc genhtml_function_coverage=1 00:29:39.220 --rc genhtml_legend=1 00:29:39.220 --rc geninfo_all_blocks=1 00:29:39.220 --rc geninfo_unexecuted_blocks=1 00:29:39.220 00:29:39.220 ' 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.220 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.221 18:37:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:39.221 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:45.787 18:37:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:45.787 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:45.788 18:37:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:45.788 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:45.788 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.788 18:37:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:45.788 Found net devices under 0000:86:00.0: cvl_0_0 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:45.788 Found net devices under 0000:86:00.1: cvl_0_1 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:45.788 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:45.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:45.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:29:45.788 00:29:45.788 --- 10.0.0.2 ping statistics --- 00:29:45.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.788 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:45.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:45.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:29:45.788 00:29:45.788 --- 10.0.0.1 ping statistics --- 00:29:45.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.788 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=603198 00:29:45.788 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 603198 00:29:45.789 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:45.789 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 603198 ']' 00:29:45.789 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.789 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:45.789 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:45.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
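For reference, the nvmfappstart step traced above reduces to the following minimal sketch: launch nvmf_tgt inside the target network namespace, then poll the RPC socket until it answers. Only the nvmf_tgt command line is taken verbatim from the trace; the readiness loop is an illustrative stand-in for the harness's waitforlisten helper, and SPDK_ROOT is an assumed shorthand for the workspace path.

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed shorthand
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # Poll the default RPC socket (/var/tmp/spdk.sock); rpc_get_methods is just a cheap probe.
    until "$SPDK_ROOT/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died during startup
        sleep 0.5
    done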
00:29:45.789 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:45.789 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:45.789 [2024-10-08 18:37:38.291532] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:45.789 [2024-10-08 18:37:38.292551] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:29:45.789 [2024-10-08 18:37:38.292591] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.789 [2024-10-08 18:37:38.365853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:45.789 [2024-10-08 18:37:38.444543] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.789 [2024-10-08 18:37:38.444578] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:45.789 [2024-10-08 18:37:38.444585] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:45.789 [2024-10-08 18:37:38.444591] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:45.789 [2024-10-08 18:37:38.444596] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:45.789 [2024-10-08 18:37:38.445365] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.789 [2024-10-08 18:37:38.445366] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.789 [2024-10-08 18:37:38.511963] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:45.789 [2024-10-08 18:37:38.512483] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:45.789 [2024-10-08 18:37:38.512702] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:46.048 [2024-10-08 18:37:39.174173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:46.048 [2024-10-08 18:37:39.214558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:46.048 NULL1 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.048 18:37:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:46.048 Delay0 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=603445 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:46.048 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:46.048 [2024-10-08 18:37:39.315873] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
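Condensed, the target-side setup that delete_subsystem.sh traced above is the RPC sequence below; the commands are copied verbatim from the trace, with SPDK_ROOT the same assumed shorthand as earlier. The Delay0 bdev layers one second of artificial latency (the -r/-t/-w/-n values are in microseconds) over the NULL1 bdev, which keeps perf I/O pending long enough for the nvmf_delete_subsystem call that follows to race against it.

    rpc="$SPDK_ROOT/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512              # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Initiator side: 5 s of queued I/O against the delayed namespace (verbatim from the trace).
    "$SPDK_ROOT/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!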
00:29:47.950 18:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:47.950 18:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.950 18:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 starting I/O failed: -6 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 starting I/O failed: -6 00:29:48.209 Write completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 starting I/O failed: -6 00:29:48.209 Write completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 starting I/O failed: -6 00:29:48.209 Write completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 starting I/O failed: -6 00:29:48.209 Write completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 starting I/O failed: -6 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Write completed with error (sct=0, sc=8) 00:29:48.209 starting I/O failed: -6 00:29:48.209 Write completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 starting I/O failed: -6 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Write completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Write completed with error (sct=0, sc=8) 00:29:48.209 starting I/O failed: -6 00:29:48.209 Write completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 starting I/O failed: -6 00:29:48.209 Write completed with error (sct=0, sc=8) 00:29:48.209 Write completed with error (sct=0, sc=8) 00:29:48.209 Write completed with error (sct=0, sc=8) 00:29:48.209 Write completed with error (sct=0, sc=8) 00:29:48.209 starting I/O failed: -6 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 Write completed with error (sct=0, sc=8) 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.209 [2024-10-08 18:37:41.388797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2381930 is same with the state(6) to be set 00:29:48.209 Read completed with error (sct=0, sc=8) 00:29:48.210 Read 
completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 starting I/O failed: -6 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Write completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 Read completed with error (sct=0, sc=8) 00:29:48.210 starting I/O failed: -6 00:29:48.210 Write completed 
with error (sct=0, sc=8)
00:29:48.210 [repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" records, interleaved with "starting I/O failed: -6"; identical records condensed]
00:29:48.210 [2024-10-08 18:37:41.392613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fef60000c00 is same with the state(6) to be set
00:29:48.210 starting I/O failed: -6
00:29:48.210 [repeated Read/Write completion-error records condensed]
00:29:49.147 [2024-10-08 18:37:42.368414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2382a70 is same with the state(6) to be set
00:29:49.147 [repeated Read/Write completion-error records condensed]
00:29:49.147 [2024-10-08 18:37:42.391937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2381390 is same with the state(6) to be set
00:29:49.148 [repeated Read/Write completion-error records condensed]
00:29:49.148 [2024-10-08 18:37:42.392262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2381750 is same with the state(6) to be set
00:29:49.148 [repeated Read/Write completion-error records condensed]
00:29:49.148 [2024-10-08 18:37:42.392888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fef6000cfe0 is same with the state(6) to be set
00:29:49.148 [repeated Read/Write completion-error records condensed]
00:29:49.148 [2024-10-08 18:37:42.393598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fef6000d7a0 is same with the state(6) to be set
00:29:49.148 Initializing NVMe Controllers
00:29:49.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:49.148 Controller IO queue size 128, less than required.
00:29:49.148 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:49.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:29:49.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:29:49.148 Initialization complete. Launching workers.
00:29:49.148 ========================================================
00:29:49.148 Latency(us)
00:29:49.148 Device Information : IOPS MiB/s Average min max
00:29:49.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.17 0.08 888239.67 306.62 1006561.03
00:29:49.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.77 0.07 939053.32 209.14 1010018.43
00:29:49.148 ========================================================
00:29:49.148 Total : 324.93 0.16 911973.46 209.14 1010018.43
00:29:49.148
00:29:49.148 [2024-10-08 18:37:42.394255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2382a70 (9): Bad file descriptor
00:29:49.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:49.148 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:49.148 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:29:49.148 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 603445
00:29:49.148 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 603445
00:29:49.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (603445) - No such process
00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 603445
00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 603445
00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 603445
00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:49.716 [2024-10-08 18:37:42.922521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=603916 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 603916 00:29:49.716 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:49.716 [2024-10-08 18:37:42.994062] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
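Stripped of the xtrace plumbing, the sequence above is a compact pattern: recreate the subsystem over JSON-RPC, start spdk_nvme_perf against it in the background, delete the subsystem out from under the I/O, and poll the perf pid until it exits. A minimal sketch under this run's paths, NQN, and perf flags (driving the RPCs through scripts/rpc.py rather than the harness's rpc_cmd wrapper is an assumption about the plumbing):

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NQN=nqn.2016-06.io.spdk:cnode1

# Recreate subsystem, listener, and namespace (the same three RPCs as the trace).
"$SPDK/scripts/rpc.py" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" Delay0

# Start I/O in the background and remember the pid.
"$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Delete the subsystem while perf is running (the step that produced the
# sc=8 aborts above), then wait for perf to notice and exit.
"$SPDK/scripts/rpc.py" nvmf_delete_subsystem "$NQN"
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do    # kill -0 only probes pid existence
    (( delay++ > 20 )) && { echo "perf still alive" >&2; exit 1; }
    sleep 0.5
done

The (( delay++ > 20 )) bound and the 0.5 s poll interval mirror the delete_subsystem.sh loop traced below; the first run above used the same loop with a bound of 30.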
00:29:50.284 18:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:50.284 18:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 603916 00:29:50.284 18:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:50.852 18:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:50.852 18:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 603916 00:29:50.852 18:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:51.419 18:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:51.419 18:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 603916 00:29:51.419 18:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:51.677 18:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:51.677 18:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 603916 00:29:51.677 18:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:52.244 18:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:52.244 18:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 603916 00:29:52.244 18:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:52.811 18:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:52.811 18:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 603916 00:29:52.811 18:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:52.811 Initializing NVMe Controllers 00:29:52.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:52.811 Controller IO queue size 128, less than required. 00:29:52.811 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:52.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:52.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:52.811 Initialization complete. Launching workers. 
00:29:52.811 ========================================================
00:29:52.811 Latency(us)
00:29:52.811 Device Information : IOPS MiB/s Average min max
00:29:52.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004109.47 1000125.66 1042757.58
00:29:52.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004382.77 1000182.11 1041668.37
00:29:52.811 ========================================================
00:29:52.811 Total : 256.00 0.12 1004246.12 1000125.66 1042757.58
00:29:52.811
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 603916
00:29:53.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (603916) - No such process
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 603916
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:53.379 rmmod nvme_tcp
00:29:53.379 rmmod nvme_fabrics
00:29:53.379 rmmod nvme_keyring
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 603198 ']'
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 603198
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 603198 ']'
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 603198
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 603198 00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 603198' 00:29:53.379 killing process with pid 603198 00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 603198 00:29:53.379 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 603198 00:29:53.684 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:53.684 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:53.684 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:53.684 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:53.684 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:29:53.684 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:53.684 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:29:53.684 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:53.684 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:53.684 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.684 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.684 18:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.588 18:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:55.588 00:29:55.588 real 0m16.755s 00:29:55.588 user 0m26.073s 00:29:55.588 sys 0m6.207s 00:29:55.588 18:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:55.588 18:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:55.588 ************************************ 00:29:55.588 END TEST nvmf_delete_subsystem 00:29:55.588 ************************************ 00:29:55.588 18:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:55.588 18:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:55.588 18:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:29:55.588 18:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:55.848 ************************************ 00:29:55.848 START TEST nvmf_host_management 00:29:55.848 ************************************ 00:29:55.848 18:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:55.848 * Looking for test storage... 00:29:55.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:55.848 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:55.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.849 --rc genhtml_branch_coverage=1 00:29:55.849 --rc genhtml_function_coverage=1 00:29:55.849 --rc genhtml_legend=1 00:29:55.849 --rc geninfo_all_blocks=1 00:29:55.849 --rc geninfo_unexecuted_blocks=1 00:29:55.849 00:29:55.849 ' 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:55.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.849 --rc genhtml_branch_coverage=1 00:29:55.849 --rc genhtml_function_coverage=1 00:29:55.849 --rc genhtml_legend=1 00:29:55.849 --rc geninfo_all_blocks=1 00:29:55.849 --rc geninfo_unexecuted_blocks=1 00:29:55.849 00:29:55.849 ' 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:55.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.849 --rc genhtml_branch_coverage=1 00:29:55.849 --rc genhtml_function_coverage=1 00:29:55.849 --rc genhtml_legend=1 00:29:55.849 --rc geninfo_all_blocks=1 00:29:55.849 --rc geninfo_unexecuted_blocks=1 00:29:55.849 00:29:55.849 ' 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:55.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.849 --rc genhtml_branch_coverage=1 00:29:55.849 --rc genhtml_function_coverage=1 00:29:55.849 --rc genhtml_legend=1 
00:29:55.849 --rc geninfo_all_blocks=1 00:29:55.849 --rc geninfo_unexecuted_blocks=1 00:29:55.849 00:29:55.849 ' 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.849 18:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:55.849 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:02.415 18:37:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:02.415 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:02.415 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
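Behind the Found messages that follow, nvmf/common.sh resolves each matched E810 PCI function to its kernel net device purely through sysfs: glob the device's net/ directory, strip the path prefix, and keep the interface if its link is up. A standalone equivalent of that lookup (the PCI address is the one reported in this run; reading operstate for the up test is an assumption, the harness keys on its own [[ up == up ]] check):

#!/usr/bin/env bash
pci=0000:86:00.0                      # first 0x8086:0x159b function found above
for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $netdev ]] || continue      # glob missed: no net device bound to this function
    name=${netdev##*/}                # same ##*/ strip as pci_net_devs above
    state=$(<"$netdev/operstate")     # "up" / "down"
    echo "Found net device under $pci: $name ($state)"
done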
00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:02.415 Found net devices under 0000:86:00.0: cvl_0_0 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:02.415 Found net devices under 0000:86:00.1: cvl_0_1 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:02.415 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.416 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:02.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:02.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:30:02.416 00:30:02.416 --- 10.0.0.2 ping statistics --- 00:30:02.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.416 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:02.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:30:02.416 00:30:02.416 --- 10.0.0.1 ping statistics --- 00:30:02.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.416 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=608124 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 608124 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 608124 ']' 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
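The two ping checks just above close out nvmf_tcp_init: one E810 port has been moved into a private network namespace so that target (10.0.0.2) and initiator (10.0.0.1) exchange traffic over a real link rather than loopback. Condensed from the trace, the plumbing is (interface and namespace names are this run's; root required):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# The comment tags the rule so nvmftestfini can strip it later with
# iptables-save | grep -v SPDK_NVMF | iptables-restore.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator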
00:30:02.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:02.416 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:02.416 [2024-10-08 18:37:55.113953] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:02.416 [2024-10-08 18:37:55.114857] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:30:02.416 [2024-10-08 18:37:55.114894] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.416 [2024-10-08 18:37:55.186628] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:02.416 [2024-10-08 18:37:55.257941] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.416 [2024-10-08 18:37:55.257985] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:02.416 [2024-10-08 18:37:55.257992] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.416 [2024-10-08 18:37:55.257997] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.416 [2024-10-08 18:37:55.258003] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:02.416 [2024-10-08 18:37:55.259525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.416 [2024-10-08 18:37:55.259634] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:30:02.416 [2024-10-08 18:37:55.259741] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.416 [2024-10-08 18:37:55.259742] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:30:02.416 [2024-10-08 18:37:55.346244] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:02.416 [2024-10-08 18:37:55.346522] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:02.416 [2024-10-08 18:37:55.346893] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:02.416 [2024-10-08 18:37:55.347254] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:02.416 [2024-10-08 18:37:55.347332] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
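With connectivity verified, nvmfappstart launches the target inside that namespace in interrupt mode and blocks until the RPC socket answers; the notices above (interrupt mode enabled, reactors on cores 1-4 for -m 0x1E, poll groups set to intr) are what a healthy start looks like. A simplified stand-in for the launch-and-wait step (binary path and flags are from this run; probing rpc_get_methods over /var/tmp/spdk.sock approximates the harness's waitforlisten helper):

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!

# Poll until the app serves RPCs on the default socket, bailing out if it dies.
for (( i = 0; i < 100; i++ )); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    sleep 0.1
done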
00:30:02.675 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:02.675 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:30:02.675 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:02.675 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:02.675 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:02.675 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.675 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:02.675 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.675 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:02.675 [2024-10-08 18:37:55.988522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.934 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.934 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:02.934 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:02.934 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:02.934 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:02.934 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:02.934 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:02.934 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:02.935 Malloc0 00:30:02.935 [2024-10-08 18:37:56.068844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=608192 00:30:02.935 18:37:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 608192 /var/tmp/bdevperf.sock 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 608192 ']' 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:02.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:02.935 { 00:30:02.935 "params": { 00:30:02.935 "name": "Nvme$subsystem", 00:30:02.935 "trtype": "$TEST_TRANSPORT", 00:30:02.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:02.935 "adrfam": "ipv4", 00:30:02.935 "trsvcid": "$NVMF_PORT", 00:30:02.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:02.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:02.935 "hdgst": ${hdgst:-false}, 00:30:02.935 "ddgst": ${ddgst:-false} 00:30:02.935 }, 00:30:02.935 "method": "bdev_nvme_attach_controller" 00:30:02.935 } 00:30:02.935 EOF 00:30:02.935 )") 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
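The heredoc traced above is gen_nvmf_target_json building one bdev_nvme_attach_controller stanza per subsystem and piping the result through jq; the --json /dev/fd/63 on the bdevperf command line is bash process substitution handing that output (the JSON printed just below) to bdevperf as if it were a config file. Written out directly, the traced invocation amounts to the following sketch (flags copied from the command above; the long workspace path is shortened):

    # Feed the generated target JSON to bdevperf without a temp file; the
    # child process sees <(...) as /dev/fd/63, matching the traced argv.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10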
00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:30:02.935 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:02.935 "params": { 00:30:02.935 "name": "Nvme0", 00:30:02.935 "trtype": "tcp", 00:30:02.935 "traddr": "10.0.0.2", 00:30:02.935 "adrfam": "ipv4", 00:30:02.935 "trsvcid": "4420", 00:30:02.935 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:02.935 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:02.935 "hdgst": false, 00:30:02.935 "ddgst": false 00:30:02.935 }, 00:30:02.935 "method": "bdev_nvme_attach_controller" 00:30:02.935 }' 00:30:02.935 [2024-10-08 18:37:56.167668] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:30:02.935 [2024-10-08 18:37:56.167719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid608192 ] 00:30:02.935 [2024-10-08 18:37:56.239299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.193 [2024-10-08 18:37:56.312538] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.193 Running I/O for 10 seconds... 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1155 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1155 -ge 100 ']' 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.761 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.761 [2024-10-08 18:37:57.080424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.761 [2024-10-08 18:37:57.080461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.761 [2024-10-08 18:37:57.080477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.761 [2024-10-08 18:37:57.080485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.761 [2024-10-08 18:37:57.080496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.761 [2024-10-08 18:37:57.080503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.761 [2024-10-08 18:37:57.080511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.761 [2024-10-08 18:37:57.080518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.761 [2024-10-08 18:37:57.080531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.761 [2024-10-08 18:37:57.080538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.761 [2024-10-08 18:37:57.080547] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.761 [2024-10-08 18:37:57.080553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.761 [2024-10-08 18:37:57.080561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.761 [2024-10-08 18:37:57.080568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.761 [2024-10-08 18:37:57.080576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.761 [2024-10-08 18:37:57.080583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.761 [2024-10-08 18:37:57.080591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.761 [2024-10-08 18:37:57.080597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.761 [2024-10-08 18:37:57.080605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.761 [2024-10-08 18:37:57.080612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.761 [2024-10-08 18:37:57.080620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.761 [2024-10-08 18:37:57.080627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.761 [2024-10-08 18:37:57.080634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.080988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.080996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.762 [2024-10-08 18:37:57.081248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-10-08 18:37:57.081254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.763 [2024-10-08 18:37:57.081262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-10-08 18:37:57.081269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.763 [2024-10-08 18:37:57.081278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-10-08 18:37:57.081285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.763 [2024-10-08 18:37:57.081293] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-10-08 18:37:57.081299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.763 [2024-10-08 18:37:57.081307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-10-08 18:37:57.081314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.763 [2024-10-08 18:37:57.081322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-10-08 18:37:57.081328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.763 [2024-10-08 18:37:57.081336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-10-08 18:37:57.081343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.763 [2024-10-08 18:37:57.081351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-10-08 18:37:57.081357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.763 [2024-10-08 18:37:57.081365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-10-08 18:37:57.081372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.763 [2024-10-08 18:37:57.081384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-10-08 18:37:57.081391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.763 [2024-10-08 18:37:57.081399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-10-08 18:37:57.081405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.763 [2024-10-08 18:37:57.081413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-10-08 18:37:57.081419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.763 [2024-10-08 18:37:57.081483] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x159a8e0 was disconnected and freed. reset controller. 
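Two overlapping events are traced above. First, waitforio confirmed the workload was actually running: it queried bdev_get_iostat over the bdevperf RPC socket and extracted .bdevs[0].num_read_ops with jq until the count (1155 here) cleared the 100-op threshold. Second, after nvmf_subsystem_remove_host was issued, the target tore the connection down, so every queued READ/WRITE completed as ABORTED - SQ DELETION and qpair 0x159a8e0 was freed; the flood of abort completions is the injected fault taking effect, not a malfunction. One iteration of the polling step, as a sketch (RPC name, jq filter, and threshold are as traced; the surrounding retry loop is assumed):

    # Ask bdevperf for per-bdev I/O statistics and pull out the read count.
    read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    # The test declares I/O healthy once at least 100 reads have completed.
    [ "$read_io_count" -ge 100 ] && echo "I/O confirmed on Nvme0n1"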
00:30:04.022 [2024-10-08 18:37:57.082383] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:04.022 task offset: 26880 on job bdev=Nvme0n1 fails 00:30:04.022 00:30:04.022 Latency(us) 00:30:04.022 [2024-10-08T16:37:57.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.022 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:04.022 Job: Nvme0n1 ended in about 0.61 seconds with error 00:30:04.022 Verification LBA range: start 0x0 length 0x400 00:30:04.022 Nvme0n1 : 0.61 1977.74 123.61 104.09 0.00 30125.22 1513.57 26588.89 00:30:04.022 [2024-10-08T16:37:57.345Z] =================================================================================================================== 00:30:04.022 [2024-10-08T16:37:57.345Z] Total : 1977.74 123.61 104.09 0.00 30125.22 1513.57 26588.89 00:30:04.022 [2024-10-08 18:37:57.084744] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:04.022 [2024-10-08 18:37:57.084767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13815c0 (9): Bad file descriptor 00:30:04.022 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.022 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:04.022 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.022 [2024-10-08 18:37:57.085806] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:30:04.022 [2024-10-08 18:37:57.085876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:04.022 [2024-10-08 18:37:57.085898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.022 [2024-10-08 18:37:57.085911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:30:04.022 [2024-10-08 18:37:57.085918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:30:04.022 [2024-10-08 18:37:57.085925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.022 [2024-10-08 18:37:57.085932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13815c0 00:30:04.022 [2024-10-08 18:37:57.085950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13815c0 (9): Bad file descriptor 00:30:04.022 [2024-10-08 18:37:57.085962] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:04.022 [2024-10-08 18:37:57.085969] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:04.022 [2024-10-08 18:37:57.085977] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:04.022 [2024-10-08 18:37:57.085989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
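This is the assertion at the heart of the test. With the host NQN removed from the subsystem's allow list, bdevperf's automatic controller reset gets a FABRIC CONNECT completion of sct 1, sc 132 (0x84, the fabrics "invalid host" status behind the "does not allow host" error above), so the reset fails and the controller is left in a failed state, ending the first bdevperf run with I/O errors; nvmf_subsystem_add_host then restores access for the follow-up run. The injected fault and its undo as plain RPC calls (a sketch; RPC names and NQNs are taken from the trace):

    # Fault injection: drop the host from the allow list; reconnects now fail.
    scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Undo: re-admit the host so the next bdevperf run can connect.
    scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0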
00:30:04.022 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.022 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.022 18:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:04.958 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 608192 00:30:04.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (608192) - No such process 00:30:04.958 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:04.959 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:04.959 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:04.959 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:04.959 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:30:04.959 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:30:04.959 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:04.959 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:04.959 { 00:30:04.959 "params": { 00:30:04.959 "name": "Nvme$subsystem", 00:30:04.959 "trtype": "$TEST_TRANSPORT", 00:30:04.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.959 "adrfam": "ipv4", 00:30:04.959 "trsvcid": "$NVMF_PORT", 00:30:04.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.959 "hdgst": ${hdgst:-false}, 00:30:04.959 "ddgst": ${ddgst:-false} 00:30:04.959 }, 00:30:04.959 "method": "bdev_nvme_attach_controller" 00:30:04.959 } 00:30:04.959 EOF 00:30:04.959 )") 00:30:04.959 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:30:04.959 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:30:04.959 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:30:04.959 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:04.959 "params": { 00:30:04.959 "name": "Nvme0", 00:30:04.959 "trtype": "tcp", 00:30:04.959 "traddr": "10.0.0.2", 00:30:04.959 "adrfam": "ipv4", 00:30:04.959 "trsvcid": "4420", 00:30:04.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:04.959 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:04.959 "hdgst": false, 00:30:04.959 "ddgst": false 00:30:04.959 }, 00:30:04.959 "method": "bdev_nvme_attach_controller" 00:30:04.959 }' 00:30:04.959 [2024-10-08 18:37:58.146663] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
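The "No such process" complaint traced above is expected: the first bdevperf exited on its own once the reset failed, so the kill -9 at host_management.sh line 91 has nothing to signal, and the || true companion (traced as the bare # true, and visible verbatim in the earlier trap definition) makes sure the failed kill cannot abort the test script. The guard, as a sketch of that line ($perfpid is the variable name used in the traced trap):

    # The perf process may already be gone by cleanup time; swallow the error.
    kill -9 "$perfpid" || true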
00:30:04.959 [2024-10-08 18:37:58.146711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid608641 ] 00:30:04.959 [2024-10-08 18:37:58.212874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.216 [2024-10-08 18:37:58.283432] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.473 Running I/O for 1 seconds... 00:30:06.408 2008.00 IOPS, 125.50 MiB/s 00:30:06.408 Latency(us) 00:30:06.408 [2024-10-08T16:37:59.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.408 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:06.408 Verification LBA range: start 0x0 length 0x400 00:30:06.408 Nvme0n1 : 1.01 2052.80 128.30 0.00 0.00 30586.67 1973.88 27337.87 00:30:06.408 [2024-10-08T16:37:59.731Z] =================================================================================================================== 00:30:06.408 [2024-10-08T16:37:59.731Z] Total : 2052.80 128.30 0.00 0.00 30586.67 1973.88 27337.87 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:06.667 rmmod nvme_tcp 00:30:06.667 rmmod nvme_fabrics 00:30:06.667 rmmod nvme_keyring 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 608124 ']' 00:30:06.667 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 608124 00:30:06.668 18:37:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 608124 ']' 00:30:06.668 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 608124 00:30:06.668 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:30:06.668 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:06.668 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 608124 00:30:06.668 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:06.668 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:06.668 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 608124' 00:30:06.668 killing process with pid 608124 00:30:06.668 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 608124 00:30:06.668 18:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 608124 00:30:06.927 [2024-10-08 18:38:00.123589] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:06.927 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:06.927 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:06.927 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:06.927 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:06.927 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:30:06.927 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:06.927 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:30:06.927 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:06.927 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:06.927 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.927 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.927 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:09.461 00:30:09.461 real 0m13.295s 00:30:09.461 user 0m19.602s 
00:30:09.461 sys 0m6.540s 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:09.461 ************************************ 00:30:09.461 END TEST nvmf_host_management 00:30:09.461 ************************************ 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:09.461 ************************************ 00:30:09.461 START TEST nvmf_lvol 00:30:09.461 ************************************ 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:09.461 * Looking for test storage... 00:30:09.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:09.461 18:38:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:09.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.461 --rc genhtml_branch_coverage=1 00:30:09.461 --rc genhtml_function_coverage=1 00:30:09.461 --rc genhtml_legend=1 00:30:09.461 --rc geninfo_all_blocks=1 00:30:09.461 --rc geninfo_unexecuted_blocks=1 00:30:09.461 00:30:09.461 ' 00:30:09.461 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:09.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.461 --rc genhtml_branch_coverage=1 00:30:09.461 --rc genhtml_function_coverage=1 00:30:09.461 --rc genhtml_legend=1 00:30:09.461 --rc geninfo_all_blocks=1 00:30:09.461 --rc geninfo_unexecuted_blocks=1 00:30:09.461 00:30:09.461 ' 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:09.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.462 --rc genhtml_branch_coverage=1 00:30:09.462 --rc genhtml_function_coverage=1 00:30:09.462 --rc genhtml_legend=1 00:30:09.462 --rc geninfo_all_blocks=1 00:30:09.462 --rc geninfo_unexecuted_blocks=1 00:30:09.462 00:30:09.462 ' 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:09.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.462 --rc genhtml_branch_coverage=1 00:30:09.462 --rc genhtml_function_coverage=1 00:30:09.462 --rc 
genhtml_legend=1 00:30:09.462 --rc geninfo_all_blocks=1 00:30:09.462 --rc geninfo_unexecuted_blocks=1 00:30:09.462 00:30:09.462 ' 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.462 18:38:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:09.462 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:16.075 18:38:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:16.075 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:16.075 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:16.075 Found net devices under 0000:86:00.0: cvl_0_0 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:16.075 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:16.076 Found net devices under 0000:86:00.1: cvl_0_1 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.076 
18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:16.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:30:16.076 00:30:16.076 --- 10.0.0.2 ping statistics --- 00:30:16.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.076 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:16.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:30:16.076 00:30:16.076 --- 10.0.0.1 ping statistics --- 00:30:16.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.076 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=612406 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 612406 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 612406 ']' 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:16.076 18:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:16.076 [2024-10-08 18:38:08.492246] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
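The trace above moves one e810 port into a private network namespace, addresses both sides, opens TCP port 4420 in the firewall, verifies reachability in both directions, and then launches nvmf_tgt inside the namespace in interrupt mode. A minimal sketch of that sequence, with device names, addresses, and flags taken verbatim from the trace (the script scaffolding around them, and the shortened nvmf_tgt path, are assumptions):

    #!/usr/bin/env bash
    # Target NIC (cvl_0_0) lives in its own namespace; the initiator
    # NIC (cvl_0_1) stays in the root namespace.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # The log additionally tags this rule with an SPDK_NVMF comment so
    # cleanup can find and strip it later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target ns -> root ns
    # Start the target inside the namespace; -i/-e/-m copied from the trace.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &

The -m 0x7 core mask matches the three reactors the DPDK initialization notices report next.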
00:30:16.076 [2024-10-08 18:38:08.493197] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:30:16.076 [2024-10-08 18:38:08.493233] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.076 [2024-10-08 18:38:08.563862] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:16.076 [2024-10-08 18:38:08.642022] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.076 [2024-10-08 18:38:08.642058] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.076 [2024-10-08 18:38:08.642065] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.076 [2024-10-08 18:38:08.642071] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.076 [2024-10-08 18:38:08.642076] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.076 [2024-10-08 18:38:08.643034] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.076 [2024-10-08 18:38:08.643073] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.076 [2024-10-08 18:38:08.643074] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.076 [2024-10-08 18:38:08.718183] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:16.076 [2024-10-08 18:38:08.718277] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:16.076 [2024-10-08 18:38:08.718702] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:16.076 [2024-10-08 18:38:08.718928] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
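With the three reactors up and their threads switched to interrupt mode, the records that follow provision the volume under test entirely over JSON-RPC. Condensed into one place from the rpc.py calls traced below (the full jenkins path to rpc.py is shortened, and capturing the returned UUIDs into shell variables is an assumption; the verbs, flags, and sizes are verbatim):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192     # flags exactly as traced
    $rpc bdev_malloc_create 64 512                   # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE -> Malloc0
    $rpc bdev_malloc_create 64 512                   # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)   # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)  # LVOL_BDEV_INIT_SIZE=20
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

While spdk_nvme_perf later runs against this subsystem, the test snapshots the lvol, resizes it to LVOL_BDEV_FINAL_SIZE=30, clones the snapshot, and inflates the clone, as the subsequent records show.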
00:30:16.076 18:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:16.076 18:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:30:16.076 18:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:16.076 18:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:16.076 18:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:16.076 18:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.076 18:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:16.361 [2024-10-08 18:38:09.539928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.361 18:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:16.619 18:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:16.619 18:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:16.878 18:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:16.878 18:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:17.137 18:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:17.137 18:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0a9ffb21-f94f-4d48-b705-cfab43daf8d9 00:30:17.137 18:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0a9ffb21-f94f-4d48-b705-cfab43daf8d9 lvol 20 00:30:17.396 18:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ba2ea6e1-836a-4103-828a-ce840142deb3 00:30:17.396 18:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:17.654 18:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ba2ea6e1-836a-4103-828a-ce840142deb3 00:30:17.912 18:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:17.913 [2024-10-08 18:38:11.159802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:30:17.913 18:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:18.171 18:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=612900 00:30:18.171 18:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:18.171 18:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:19.108 18:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ba2ea6e1-836a-4103-828a-ce840142deb3 MY_SNAPSHOT 00:30:19.367 18:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=360b7e44-ee7f-4238-b35f-8160cb6fed20 00:30:19.367 18:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ba2ea6e1-836a-4103-828a-ce840142deb3 30 00:30:19.625 18:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 360b7e44-ee7f-4238-b35f-8160cb6fed20 MY_CLONE 00:30:19.884 18:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=32a982e1-8098-4eec-9f9a-925c3b754372 00:30:19.884 18:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 32a982e1-8098-4eec-9f9a-925c3b754372 00:30:20.453 18:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 612900 00:30:28.571 Initializing NVMe Controllers 00:30:28.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:28.571 Controller IO queue size 128, less than required. 00:30:28.571 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:28.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:28.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:28.571 Initialization complete. Launching workers. 
00:30:28.571 ========================================================
00:30:28.571 Latency(us)
00:30:28.571 Device Information : IOPS MiB/s Average min max
00:30:28.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12389.50 48.40 10332.67 1553.12 70348.02
00:30:28.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12572.30 49.11 10185.26 4152.59 57672.94
00:30:28.571 ========================================================
00:30:28.571 Total : 24961.80 97.51 10258.43 1553.12 70348.02
00:30:28.571
00:30:28.571 18:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:30:28.830 18:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ba2ea6e1-836a-4103-828a-ce840142deb3
00:30:29.089 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0a9ffb21-f94f-4d48-b705-cfab43daf8d9
00:30:29.089 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:30:29.089 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:30:29.089 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:30:29.089 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup
00:30:29.089 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:30:29.089 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:29.089 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:30:29.089 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:29.089 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:29.089 rmmod nvme_tcp
00:30:29.089 rmmod nvme_fabrics
00:30:29.089 rmmod nvme_keyring
00:30:29.348 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:29.348 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:30:29.348 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:30:29.348 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 612406 ']'
00:30:29.348 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 612406
00:30:29.348 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 612406 ']'
00:30:29.348 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 612406
00:30:29.348 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:30:29.348 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:29.348 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 612406 00:30:29.348 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:29.348 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:29.348 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 612406' 00:30:29.348 killing process with pid 612406 00:30:29.348 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 612406 00:30:29.348 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 612406 00:30:29.609 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:29.609 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:29.609 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:29.609 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:29.609 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:30:29.609 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:29.609 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:30:29.609 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:29.609 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:29.609 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.609 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.609 18:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.517 18:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:31.517 00:30:31.517 real 0m22.486s 00:30:31.517 user 0m55.783s 00:30:31.517 sys 0m9.915s 00:30:31.517 18:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:31.517 18:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:31.517 ************************************ 00:30:31.517 END TEST nvmf_lvol 00:30:31.517 ************************************ 00:30:31.517 18:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:31.517 18:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:31.517 18:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:31.517 18:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:31.777 ************************************ 00:30:31.777 START TEST nvmf_lvs_grow 00:30:31.777 
************************************ 00:30:31.777 18:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:31.777 * Looking for test storage... 00:30:31.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:31.777 18:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:31.777 18:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:30:31.777 18:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:31.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.777 --rc genhtml_branch_coverage=1 00:30:31.777 --rc genhtml_function_coverage=1 00:30:31.777 --rc genhtml_legend=1 00:30:31.777 --rc geninfo_all_blocks=1 00:30:31.777 --rc geninfo_unexecuted_blocks=1 00:30:31.777 00:30:31.777 ' 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:31.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.777 --rc genhtml_branch_coverage=1 00:30:31.777 --rc genhtml_function_coverage=1 00:30:31.777 --rc genhtml_legend=1 00:30:31.777 --rc geninfo_all_blocks=1 00:30:31.777 --rc geninfo_unexecuted_blocks=1 00:30:31.777 00:30:31.777 ' 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:31.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.777 --rc genhtml_branch_coverage=1 00:30:31.777 --rc genhtml_function_coverage=1 00:30:31.777 --rc genhtml_legend=1 00:30:31.777 --rc geninfo_all_blocks=1 00:30:31.777 --rc geninfo_unexecuted_blocks=1 00:30:31.777 00:30:31.777 ' 00:30:31.777 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:31.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.777 --rc genhtml_branch_coverage=1 00:30:31.777 --rc genhtml_function_coverage=1 00:30:31.777 --rc genhtml_legend=1 00:30:31.777 --rc geninfo_all_blocks=1 00:30:31.778 --rc geninfo_unexecuted_blocks=1 00:30:31.778 00:30:31.778 ' 00:30:31.778 18:38:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
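build_nvmf_app_args, replayed here for the second test, assembles the nvmf_tgt command line one array append at a time. A reduced sketch of the branch structure implied by the records traced on either side of this point; the guard variable name is an assumption, the array operations are copied from the trace:

    build_nvmf_app_args() {
        NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shm id + full tracepoint mask
        NVMF_APP+=("${NO_HUGE[@]}")                  # expands to nothing on hugepage runs
        if ((interrupt_mode)); then                  # the "'[' 1 -eq 1 ']'" check that follows
            NVMF_APP+=(--interrupt-mode)             # why every test here is *_interrupt_mode
        fi
    }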
00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:31.778 18:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:38.498 18:38:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
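gather_supported_nvmf_pci_devs, repeated here for nvmf_lvs_grow, classifies NICs by vendor:device ID through the pci_bus_cache associative array (populated elsewhere in common.sh). A sketch of the lookup table the trace walks; representative entries only, with comments tied to what the surrounding records show:

    intel=0x8086 mellanox=0x15b3
    e810+=(${pci_bus_cache["$intel:0x1592"]})
    e810+=(${pci_bus_cache["$intel:0x159b"]})     # both ports 0000:86:00.0/1 match this ID
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})   # one of eight Mellanox IDs in the trace
    pci_devs+=("${e810[@]}")
    [[ e810 == e810 ]] && pci_devs=("${e810[@]}") # the branch that keeps only E810 ports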
00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:38.498 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:38.498 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.498 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:38.499 Found net devices under 0000:86:00.0: cvl_0_0 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:38.499 Found net devices under 0000:86:00.1: cvl_0_1 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:38.499 18:38:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:38.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:38.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms
00:30:38.499
00:30:38.499 --- 10.0.0.2 ping statistics ---
00:30:38.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:38.499 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:38.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:38.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms
00:30:38.499
00:30:38.499 --- 10.0.0.1 ping statistics ---
00:30:38.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:38.499 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=618057
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 618057
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 618057 ']'
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:38.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:38.499 18:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:30:38.499 [2024-10-08 18:38:31.035177] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
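[editor's note] nvmf_tcp_init, traced above, fakes a two-host setup with a single dual-port NIC: one port moves into a network namespace as the target at 10.0.0.2, the other stays in the root namespace as the initiator at 10.0.0.1, and the NVMe/TCP port is opened before both directions are ping-verified. The same sequence, condensed (ipts is the harness wrapper that tags iptables rules with an SPDK_NVMF comment so they can be cleaned up later):

  ip netns add cvl_0_0_ns_spdk                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port in
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator
  # From here on every target-side command is prefixed with the namespace,
  # which is why nvmf_tgt is launched as "ip netns exec cvl_0_0_ns_spdk ...".
  NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)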
00:30:38.499 [2024-10-08 18:38:31.036160] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:30:38.499 [2024-10-08 18:38:31.036196] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.499 [2024-10-08 18:38:31.107213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.499 [2024-10-08 18:38:31.185685] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:38.499 [2024-10-08 18:38:31.185722] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:38.499 [2024-10-08 18:38:31.185730] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:38.499 [2024-10-08 18:38:31.185735] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:38.499 [2024-10-08 18:38:31.185741] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:38.499 [2024-10-08 18:38:31.186281] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.499 [2024-10-08 18:38:31.253474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:38.499 [2024-10-08 18:38:31.253695] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:38.761 18:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:38.761 18:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:30:38.761 18:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:38.761 18:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:38.761 18:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:38.761 18:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:38.761 18:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:39.019 [2024-10-08 18:38:32.086958] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.019 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:39.019 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:39.019 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:39.019 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:39.019 ************************************ 00:30:39.019 START TEST lvs_grow_clean 00:30:39.019 ************************************ 00:30:39.019 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:30:39.019 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:39.019 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:39.019 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:39.020 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:39.020 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:39.020 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:39.020 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:39.020 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:39.020 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:39.278 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:39.278 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:39.278 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c4ca80eb-e8d4-4eab-b309-9a3fc4c2a170 00:30:39.278 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ca80eb-e8d4-4eab-b309-9a3fc4c2a170 00:30:39.278 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:39.537 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:39.537 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:39.537 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c4ca80eb-e8d4-4eab-b309-9a3fc4c2a170 lvol 150 00:30:39.795 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9961ef49-d4d2-4a31-a530-68a0e8f02634 00:30:39.795 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:39.795 18:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:40.054 [2024-10-08 18:38:33.126681] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:40.054 [2024-10-08 18:38:33.126808] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:40.054 true 00:30:40.054 18:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ca80eb-e8d4-4eab-b309-9a3fc4c2a170 00:30:40.054 18:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:40.054 18:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:40.054 18:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:40.313 18:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9961ef49-d4d2-4a31-a530-68a0e8f02634 00:30:40.572 18:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.572 [2024-10-08 18:38:33.851124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.572 18:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:40.830 18:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=618665 00:30:40.830 18:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:40.831 18:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:40.831 18:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 618665 /var/tmp/bdevperf.sock 00:30:40.831 18:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 618665 ']' 00:30:40.831 18:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:40.831 18:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:40.831 18:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:40.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:40.831 18:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:40.831 18:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:40.831 [2024-10-08 18:38:34.114310] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:30:40.831 [2024-10-08 18:38:34.114361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618665 ] 00:30:41.089 [2024-10-08 18:38:34.180189] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.089 [2024-10-08 18:38:34.257915] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.657 18:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:41.657 18:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:30:41.657 18:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:41.916 Nvme0n1 00:30:41.916 18:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:42.175 [ 00:30:42.175 { 00:30:42.175 "name": "Nvme0n1", 00:30:42.175 "aliases": [ 00:30:42.175 "9961ef49-d4d2-4a31-a530-68a0e8f02634" 00:30:42.175 ], 00:30:42.175 "product_name": "NVMe disk", 00:30:42.175 "block_size": 4096, 00:30:42.175 "num_blocks": 38912, 00:30:42.175 "uuid": "9961ef49-d4d2-4a31-a530-68a0e8f02634", 00:30:42.175 "numa_id": 1, 00:30:42.175 "assigned_rate_limits": { 00:30:42.175 "rw_ios_per_sec": 0, 00:30:42.175 "rw_mbytes_per_sec": 0, 00:30:42.175 "r_mbytes_per_sec": 0, 00:30:42.175 "w_mbytes_per_sec": 0 00:30:42.175 }, 00:30:42.175 "claimed": false, 00:30:42.175 "zoned": false, 00:30:42.175 "supported_io_types": { 00:30:42.175 "read": true, 00:30:42.175 "write": true, 00:30:42.175 "unmap": true, 00:30:42.175 "flush": true, 00:30:42.175 "reset": true, 00:30:42.175 "nvme_admin": true, 00:30:42.175 "nvme_io": true, 00:30:42.175 "nvme_io_md": false, 00:30:42.175 "write_zeroes": true, 00:30:42.175 "zcopy": false, 00:30:42.175 "get_zone_info": false, 00:30:42.175 "zone_management": false, 00:30:42.175 "zone_append": false, 00:30:42.175 "compare": true, 00:30:42.175 "compare_and_write": true, 00:30:42.175 "abort": true, 00:30:42.175 "seek_hole": false, 00:30:42.175 "seek_data": false, 00:30:42.175 "copy": true, 
00:30:42.175 "nvme_iov_md": false 00:30:42.175 }, 00:30:42.175 "memory_domains": [ 00:30:42.175 { 00:30:42.175 "dma_device_id": "system", 00:30:42.175 "dma_device_type": 1 00:30:42.175 } 00:30:42.175 ], 00:30:42.175 "driver_specific": { 00:30:42.175 "nvme": [ 00:30:42.175 { 00:30:42.175 "trid": { 00:30:42.175 "trtype": "TCP", 00:30:42.175 "adrfam": "IPv4", 00:30:42.175 "traddr": "10.0.0.2", 00:30:42.175 "trsvcid": "4420", 00:30:42.175 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:42.175 }, 00:30:42.175 "ctrlr_data": { 00:30:42.175 "cntlid": 1, 00:30:42.175 "vendor_id": "0x8086", 00:30:42.175 "model_number": "SPDK bdev Controller", 00:30:42.175 "serial_number": "SPDK0", 00:30:42.175 "firmware_revision": "25.01", 00:30:42.175 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:42.175 "oacs": { 00:30:42.175 "security": 0, 00:30:42.175 "format": 0, 00:30:42.175 "firmware": 0, 00:30:42.175 "ns_manage": 0 00:30:42.175 }, 00:30:42.175 "multi_ctrlr": true, 00:30:42.175 "ana_reporting": false 00:30:42.175 }, 00:30:42.175 "vs": { 00:30:42.175 "nvme_version": "1.3" 00:30:42.175 }, 00:30:42.175 "ns_data": { 00:30:42.175 "id": 1, 00:30:42.175 "can_share": true 00:30:42.175 } 00:30:42.175 } 00:30:42.175 ], 00:30:42.175 "mp_policy": "active_passive" 00:30:42.175 } 00:30:42.175 } 00:30:42.175 ] 00:30:42.175 18:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:42.175 18:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=618833 00:30:42.175 18:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:42.175 Running I/O for 10 seconds... 
00:30:43.553 Latency(us) 00:30:43.553 [2024-10-08T16:38:36.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:43.553 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:30:43.553 [2024-10-08T16:38:36.876Z] =================================================================================================================== 00:30:43.553 [2024-10-08T16:38:36.876Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:30:43.553 00:30:44.119 18:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c4ca80eb-e8d4-4eab-b309-9a3fc4c2a170 00:30:44.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:44.377 Nvme0n1 : 2.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:30:44.377 [2024-10-08T16:38:37.700Z] =================================================================================================================== 00:30:44.377 [2024-10-08T16:38:37.700Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:30:44.377 00:30:44.377 true 00:30:44.377 18:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ca80eb-e8d4-4eab-b309-9a3fc4c2a170 00:30:44.377 18:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:44.635 18:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:44.635 18:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:44.635 18:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 618833 00:30:45.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:45.202 Nvme0n1 : 3.00 23198.67 90.62 0.00 0.00 0.00 0.00 0.00 00:30:45.202 [2024-10-08T16:38:38.525Z] =================================================================================================================== 00:30:45.202 [2024-10-08T16:38:38.525Z] Total : 23198.67 90.62 0.00 0.00 0.00 0.00 0.00 00:30:45.202 00:30:46.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:46.578 Nvme0n1 : 4.00 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:30:46.578 [2024-10-08T16:38:39.901Z] =================================================================================================================== 00:30:46.578 [2024-10-08T16:38:39.901Z] Total : 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:30:46.578 00:30:47.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:47.146 Nvme0n1 : 5.00 23317.20 91.08 0.00 0.00 0.00 0.00 0.00 00:30:47.146 [2024-10-08T16:38:40.469Z] =================================================================================================================== 00:30:47.146 [2024-10-08T16:38:40.469Z] Total : 23317.20 91.08 0.00 0.00 0.00 0.00 0.00 00:30:47.146 00:30:48.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:48.521 Nvme0n1 : 6.00 23331.33 91.14 0.00 0.00 0.00 0.00 0.00 00:30:48.521 [2024-10-08T16:38:41.844Z] 
=================================================================================================================== 00:30:48.521 [2024-10-08T16:38:41.844Z] Total : 23331.33 91.14 0.00 0.00 0.00 0.00 0.00 00:30:48.521 00:30:49.460 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:49.460 Nvme0n1 : 7.00 23391.00 91.37 0.00 0.00 0.00 0.00 0.00 00:30:49.460 [2024-10-08T16:38:42.783Z] =================================================================================================================== 00:30:49.460 [2024-10-08T16:38:42.783Z] Total : 23391.00 91.37 0.00 0.00 0.00 0.00 0.00 00:30:49.460 00:30:50.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:50.395 Nvme0n1 : 8.00 23419.88 91.48 0.00 0.00 0.00 0.00 0.00 00:30:50.395 [2024-10-08T16:38:43.718Z] =================================================================================================================== 00:30:50.395 [2024-10-08T16:38:43.718Z] Total : 23419.88 91.48 0.00 0.00 0.00 0.00 0.00 00:30:50.395 00:30:51.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.378 Nvme0n1 : 9.00 23442.33 91.57 0.00 0.00 0.00 0.00 0.00 00:30:51.378 [2024-10-08T16:38:44.701Z] =================================================================================================================== 00:30:51.378 [2024-10-08T16:38:44.701Z] Total : 23442.33 91.57 0.00 0.00 0.00 0.00 0.00 00:30:51.378 00:30:52.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:52.313 Nvme0n1 : 10.00 23473.00 91.69 0.00 0.00 0.00 0.00 0.00 00:30:52.313 [2024-10-08T16:38:45.636Z] =================================================================================================================== 00:30:52.313 [2024-10-08T16:38:45.636Z] Total : 23473.00 91.69 0.00 0.00 0.00 0.00 0.00 00:30:52.313 00:30:52.313 00:30:52.313 Latency(us) 00:30:52.313 [2024-10-08T16:38:45.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:52.313 Nvme0n1 : 10.00 23478.35 91.71 0.00 0.00 5449.01 3276.80 25715.08 00:30:52.313 [2024-10-08T16:38:45.636Z] =================================================================================================================== 00:30:52.313 [2024-10-08T16:38:45.636Z] Total : 23478.35 91.71 0.00 0.00 5449.01 3276.80 25715.08 00:30:52.313 { 00:30:52.313 "results": [ 00:30:52.313 { 00:30:52.313 "job": "Nvme0n1", 00:30:52.313 "core_mask": "0x2", 00:30:52.313 "workload": "randwrite", 00:30:52.313 "status": "finished", 00:30:52.313 "queue_depth": 128, 00:30:52.313 "io_size": 4096, 00:30:52.313 "runtime": 10.003174, 00:30:52.313 "iops": 23478.347972353575, 00:30:52.313 "mibps": 91.71229676700615, 00:30:52.313 "io_failed": 0, 00:30:52.313 "io_timeout": 0, 00:30:52.313 "avg_latency_us": 5449.005547798081, 00:30:52.313 "min_latency_us": 3276.8, 00:30:52.313 "max_latency_us": 25715.078095238096 00:30:52.313 } 00:30:52.313 ], 00:30:52.313 "core_count": 1 00:30:52.313 } 00:30:52.313 18:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 618665 00:30:52.313 18:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 618665 ']' 00:30:52.313 18:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 618665 00:30:52.313 
18:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:30:52.313 18:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:52.313 18:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 618665 00:30:52.313 18:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:52.313 18:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:52.313 18:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 618665' 00:30:52.313 killing process with pid 618665 00:30:52.313 18:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 618665 00:30:52.313 Received shutdown signal, test time was about 10.000000 seconds 00:30:52.313 00:30:52.313 Latency(us) 00:30:52.313 [2024-10-08T16:38:45.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.313 [2024-10-08T16:38:45.636Z] =================================================================================================================== 00:30:52.313 [2024-10-08T16:38:45.636Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:52.313 18:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 618665 00:30:52.572 18:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:52.831 18:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:52.831 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ca80eb-e8d4-4eab-b309-9a3fc4c2a170 00:30:52.831 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:53.090 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:53.090 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:53.090 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:53.349 [2024-10-08 18:38:46.502773] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:53.349 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ca80eb-e8d4-4eab-b309-9a3fc4c2a170 00:30:53.349 18:38:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:30:53.349 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ca80eb-e8d4-4eab-b309-9a3fc4c2a170 00:30:53.349 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.349 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:53.349 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.349 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:53.349 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.349 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:53.349 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.349 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:53.349 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ca80eb-e8d4-4eab-b309-9a3fc4c2a170 00:30:53.608 request: 00:30:53.608 { 00:30:53.608 "uuid": "c4ca80eb-e8d4-4eab-b309-9a3fc4c2a170", 00:30:53.608 "method": "bdev_lvol_get_lvstores", 00:30:53.608 "req_id": 1 00:30:53.608 } 00:30:53.608 Got JSON-RPC error response 00:30:53.608 response: 00:30:53.608 { 00:30:53.608 "code": -19, 00:30:53.608 "message": "No such device" 00:30:53.608 } 00:30:53.608 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:30:53.608 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:53.608 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:53.608 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:53.608 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:53.608 aio_bdev 00:30:53.867 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9961ef49-d4d2-4a31-a530-68a0e8f02634 
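[editor's note] This block is the hot-remove half of the clean test: deleting aio_bdev yanks the base bdev out from under the open lvstore (the vbdev_lvol hotremove notice above), after which the NOT wrapper asserts that bdev_lvol_get_lvstores fails; the JSON-RPC error that follows is -19 (ENODEV, "No such device"). Re-creating an AIO bdev over the same backing file lets lvol examine rediscover the metadata, and waitforbdev polls until the lvol's UUID is visible again. A rough sketch, assuming rpc stands for scripts/rpc.py and $lvs/$lvol/$testdir hold the UUIDs and path from the trace:

  rpc bdev_aio_delete aio_bdev                 # closes lvstore "lvs" underneath
  if rpc bdev_lvol_get_lvstores -u $lvs; then  # must now fail with -19/ENODEV
      echo "lvstore survived hot-remove" >&2
      exit 1
  fi
  rpc bdev_aio_create $testdir/aio_bdev aio_bdev 4096  # same 400M backing file
  rpc bdev_wait_for_examine                    # let the lvol module reclaim it
  rpc bdev_get_bdevs -b $lvol -t 2000          # lvol reappears under its original UUID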
00:30:53.867 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=9961ef49-d4d2-4a31-a530-68a0e8f02634 00:30:53.867 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:53.867 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:30:53.867 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:53.867 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:53.867 18:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:53.867 18:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9961ef49-d4d2-4a31-a530-68a0e8f02634 -t 2000 00:30:54.126 [ 00:30:54.126 { 00:30:54.126 "name": "9961ef49-d4d2-4a31-a530-68a0e8f02634", 00:30:54.126 "aliases": [ 00:30:54.126 "lvs/lvol" 00:30:54.126 ], 00:30:54.126 "product_name": "Logical Volume", 00:30:54.126 "block_size": 4096, 00:30:54.126 "num_blocks": 38912, 00:30:54.126 "uuid": "9961ef49-d4d2-4a31-a530-68a0e8f02634", 00:30:54.126 "assigned_rate_limits": { 00:30:54.126 "rw_ios_per_sec": 0, 00:30:54.126 "rw_mbytes_per_sec": 0, 00:30:54.126 "r_mbytes_per_sec": 0, 00:30:54.126 "w_mbytes_per_sec": 0 00:30:54.126 }, 00:30:54.126 "claimed": false, 00:30:54.126 "zoned": false, 00:30:54.126 "supported_io_types": { 00:30:54.126 "read": true, 00:30:54.126 "write": true, 00:30:54.126 "unmap": true, 00:30:54.126 "flush": false, 00:30:54.126 "reset": true, 00:30:54.126 "nvme_admin": false, 00:30:54.126 "nvme_io": false, 00:30:54.126 "nvme_io_md": false, 00:30:54.126 "write_zeroes": true, 00:30:54.126 "zcopy": false, 00:30:54.126 "get_zone_info": false, 00:30:54.126 "zone_management": false, 00:30:54.126 "zone_append": false, 00:30:54.126 "compare": false, 00:30:54.126 "compare_and_write": false, 00:30:54.126 "abort": false, 00:30:54.126 "seek_hole": true, 00:30:54.126 "seek_data": true, 00:30:54.126 "copy": false, 00:30:54.126 "nvme_iov_md": false 00:30:54.126 }, 00:30:54.126 "driver_specific": { 00:30:54.126 "lvol": { 00:30:54.126 "lvol_store_uuid": "c4ca80eb-e8d4-4eab-b309-9a3fc4c2a170", 00:30:54.126 "base_bdev": "aio_bdev", 00:30:54.126 "thin_provision": false, 00:30:54.126 "num_allocated_clusters": 38, 00:30:54.126 "snapshot": false, 00:30:54.126 "clone": false, 00:30:54.126 "esnap_clone": false 00:30:54.126 } 00:30:54.126 } 00:30:54.126 } 00:30:54.126 ] 00:30:54.126 18:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:30:54.126 18:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ca80eb-e8d4-4eab-b309-9a3fc4c2a170 00:30:54.126 18:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:54.384 18:38:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:54.384 18:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:54.384 18:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ca80eb-e8d4-4eab-b309-9a3fc4c2a170 00:30:54.384 18:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:54.384 18:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9961ef49-d4d2-4a31-a530-68a0e8f02634 00:30:54.642 18:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c4ca80eb-e8d4-4eab-b309-9a3fc4c2a170 00:30:54.901 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:55.159 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:55.159 00:30:55.159 real 0m16.146s 00:30:55.159 user 0m15.848s 00:30:55.159 sys 0m1.496s 00:30:55.159 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:55.159 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:55.159 ************************************ 00:30:55.159 END TEST lvs_grow_clean 00:30:55.159 ************************************ 00:30:55.159 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:55.159 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:55.159 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:55.160 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:55.160 ************************************ 00:30:55.160 START TEST lvs_grow_dirty 00:30:55.160 ************************************ 00:30:55.160 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:30:55.160 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:55.160 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:55.160 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:55.160 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:55.160 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:55.160 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:55.160 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:55.160 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:55.160 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:55.419 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:55.419 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:55.678 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5d939913-82b2-44d4-b435-737ca9ef88d0 00:30:55.678 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d939913-82b2-44d4-b435-737ca9ef88d0 00:30:55.678 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:55.936 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:55.936 18:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:55.936 18:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5d939913-82b2-44d4-b435-737ca9ef88d0 lvol 150 00:30:55.936 18:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=61e435e0-9bfd-452d-b740-37858f92414e 00:30:55.936 18:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:55.936 18:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:56.195 [2024-10-08 18:38:49.362672] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:56.195 [2024-10-08 18:38:49.362794] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:56.195 true 00:30:56.195 18:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d939913-82b2-44d4-b435-737ca9ef88d0 00:30:56.195 18:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:56.453 18:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:56.453 18:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:56.453 18:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 61e435e0-9bfd-452d-b740-37858f92414e 00:30:56.711 18:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:56.969 [2024-10-08 18:38:50.135119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.969 18:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:57.228 18:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=621344 00:30:57.228 18:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:57.228 18:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:57.228 18:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 621344 /var/tmp/bdevperf.sock 00:30:57.228 18:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 621344 ']' 00:30:57.228 18:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:57.228 18:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:57.228 18:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:57.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
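[editor's note] The dirty variant repeats the same grow mechanics the clean test just exercised, but issues the grow while bdevperf I/O is in flight. The core sequence, reconstructed from the trace (assuming $testdir is the test/nvmf/target directory and rpc stands for scripts/rpc.py; the cluster counts match the 49 -> 99 checks in the log):

  truncate -s 200M $testdir/aio_bdev            # 200M backing file
  rpc bdev_aio_create $testdir/aio_bdev aio_bdev 4096
  lvs=$(rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 49 usable 4MiB clusters
  lvol=$(rpc bdev_lvol_create -u $lvs lvol 150) # 150M volume, exported over NVMe-oF

  truncate -s 400M $testdir/aio_bdev            # grow the file on disk...
  rpc bdev_aio_rescan aio_bdev                  # ...then the bdev: 51200 -> 102400 blocks
  rpc bdev_lvol_grow_lvstore -u $lvs            # lvstore picks up the space: 49 -> 99
  rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'

After the rescan alone total_data_clusters still reads 49 (as the check right above shows); the lvstore only grows once bdev_lvol_grow_lvstore runs.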
00:30:57.228 18:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:57.228 18:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:57.228 [2024-10-08 18:38:50.391595] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:30:57.228 [2024-10-08 18:38:50.391646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621344 ] 00:30:57.228 [2024-10-08 18:38:50.459740] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.228 [2024-10-08 18:38:50.538410] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.164 18:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:58.164 18:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:30:58.164 18:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:58.423 Nvme0n1 00:30:58.423 18:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:58.423 [ 00:30:58.423 { 00:30:58.423 "name": "Nvme0n1", 00:30:58.423 "aliases": [ 00:30:58.423 "61e435e0-9bfd-452d-b740-37858f92414e" 00:30:58.423 ], 00:30:58.423 "product_name": "NVMe disk", 00:30:58.423 "block_size": 4096, 00:30:58.423 "num_blocks": 38912, 00:30:58.423 "uuid": "61e435e0-9bfd-452d-b740-37858f92414e", 00:30:58.423 "numa_id": 1, 00:30:58.423 "assigned_rate_limits": { 00:30:58.423 "rw_ios_per_sec": 0, 00:30:58.423 "rw_mbytes_per_sec": 0, 00:30:58.423 "r_mbytes_per_sec": 0, 00:30:58.423 "w_mbytes_per_sec": 0 00:30:58.423 }, 00:30:58.423 "claimed": false, 00:30:58.423 "zoned": false, 00:30:58.423 "supported_io_types": { 00:30:58.423 "read": true, 00:30:58.423 "write": true, 00:30:58.423 "unmap": true, 00:30:58.423 "flush": true, 00:30:58.423 "reset": true, 00:30:58.423 "nvme_admin": true, 00:30:58.423 "nvme_io": true, 00:30:58.423 "nvme_io_md": false, 00:30:58.423 "write_zeroes": true, 00:30:58.423 "zcopy": false, 00:30:58.423 "get_zone_info": false, 00:30:58.423 "zone_management": false, 00:30:58.423 "zone_append": false, 00:30:58.423 "compare": true, 00:30:58.423 "compare_and_write": true, 00:30:58.423 "abort": true, 00:30:58.423 "seek_hole": false, 00:30:58.423 "seek_data": false, 00:30:58.423 "copy": true, 00:30:58.423 "nvme_iov_md": false 00:30:58.423 }, 00:30:58.423 "memory_domains": [ 00:30:58.423 { 00:30:58.423 "dma_device_id": "system", 00:30:58.423 "dma_device_type": 1 00:30:58.423 } 00:30:58.423 ], 00:30:58.423 "driver_specific": { 00:30:58.423 "nvme": [ 00:30:58.423 { 00:30:58.423 "trid": { 00:30:58.423 "trtype": "TCP", 00:30:58.423 "adrfam": "IPv4", 00:30:58.423 "traddr": "10.0.0.2", 00:30:58.423 "trsvcid": "4420", 00:30:58.423 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:58.423 }, 00:30:58.423 "ctrlr_data": { 
00:30:58.423 "cntlid": 1, 00:30:58.423 "vendor_id": "0x8086", 00:30:58.423 "model_number": "SPDK bdev Controller", 00:30:58.423 "serial_number": "SPDK0", 00:30:58.423 "firmware_revision": "25.01", 00:30:58.423 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:58.423 "oacs": { 00:30:58.423 "security": 0, 00:30:58.423 "format": 0, 00:30:58.423 "firmware": 0, 00:30:58.423 "ns_manage": 0 00:30:58.423 }, 00:30:58.423 "multi_ctrlr": true, 00:30:58.423 "ana_reporting": false 00:30:58.423 }, 00:30:58.423 "vs": { 00:30:58.423 "nvme_version": "1.3" 00:30:58.423 }, 00:30:58.423 "ns_data": { 00:30:58.423 "id": 1, 00:30:58.423 "can_share": true 00:30:58.423 } 00:30:58.423 } 00:30:58.423 ], 00:30:58.423 "mp_policy": "active_passive" 00:30:58.423 } 00:30:58.423 } 00:30:58.423 ] 00:30:58.423 18:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=621575 00:30:58.423 18:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:58.423 18:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:58.681 Running I/O for 10 seconds... 00:30:59.617 Latency(us) 00:30:59.617 [2024-10-08T16:38:52.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:59.617 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:30:59.617 [2024-10-08T16:38:52.940Z] =================================================================================================================== 00:30:59.617 [2024-10-08T16:38:52.940Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:30:59.617 00:31:00.552 18:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5d939913-82b2-44d4-b435-737ca9ef88d0 00:31:00.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:00.552 Nvme0n1 : 2.00 23050.50 90.04 0.00 0.00 0.00 0.00 0.00 00:31:00.552 [2024-10-08T16:38:53.875Z] =================================================================================================================== 00:31:00.552 [2024-10-08T16:38:53.875Z] Total : 23050.50 90.04 0.00 0.00 0.00 0.00 0.00 00:31:00.552 00:31:00.811 true 00:31:00.811 18:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d939913-82b2-44d4-b435-737ca9ef88d0 00:31:00.811 18:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:00.811 18:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:00.811 18:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:00.811 18:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 621575 00:31:01.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:01.748 Nvme0n1 : 3.00 
23177.67 90.54 0.00 0.00 0.00 0.00 0.00 00:31:01.748 [2024-10-08T16:38:55.071Z] =================================================================================================================== 00:31:01.748 [2024-10-08T16:38:55.071Z] Total : 23177.67 90.54 0.00 0.00 0.00 0.00 0.00 00:31:01.748 00:31:02.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:02.684 Nvme0n1 : 4.00 23253.25 90.83 0.00 0.00 0.00 0.00 0.00 00:31:02.684 [2024-10-08T16:38:56.007Z] =================================================================================================================== 00:31:02.684 [2024-10-08T16:38:56.007Z] Total : 23253.25 90.83 0.00 0.00 0.00 0.00 0.00 00:31:02.684 00:31:03.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:03.619 Nvme0n1 : 5.00 23327.00 91.12 0.00 0.00 0.00 0.00 0.00 00:31:03.619 [2024-10-08T16:38:56.942Z] =================================================================================================================== 00:31:03.619 [2024-10-08T16:38:56.942Z] Total : 23327.00 91.12 0.00 0.00 0.00 0.00 0.00 00:31:03.619 00:31:04.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:04.555 Nvme0n1 : 6.00 23376.17 91.31 0.00 0.00 0.00 0.00 0.00 00:31:04.555 [2024-10-08T16:38:57.878Z] =================================================================================================================== 00:31:04.555 [2024-10-08T16:38:57.878Z] Total : 23376.17 91.31 0.00 0.00 0.00 0.00 0.00 00:31:04.555 00:31:05.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:05.491 Nvme0n1 : 7.00 23411.29 91.45 0.00 0.00 0.00 0.00 0.00 00:31:05.491 [2024-10-08T16:38:58.814Z] =================================================================================================================== 00:31:05.491 [2024-10-08T16:38:58.814Z] Total : 23411.29 91.45 0.00 0.00 0.00 0.00 0.00 00:31:05.491 00:31:06.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:06.867 Nvme0n1 : 8.00 23445.62 91.58 0.00 0.00 0.00 0.00 0.00 00:31:06.867 [2024-10-08T16:39:00.190Z] =================================================================================================================== 00:31:06.867 [2024-10-08T16:39:00.190Z] Total : 23445.62 91.58 0.00 0.00 0.00 0.00 0.00 00:31:06.867 00:31:07.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:07.802 Nvme0n1 : 9.00 23419.56 91.48 0.00 0.00 0.00 0.00 0.00 00:31:07.802 [2024-10-08T16:39:01.125Z] =================================================================================================================== 00:31:07.802 [2024-10-08T16:39:01.125Z] Total : 23419.56 91.48 0.00 0.00 0.00 0.00 0.00 00:31:07.802 00:31:08.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.737 Nvme0n1 : 10.00 23420.80 91.49 0.00 0.00 0.00 0.00 0.00 00:31:08.737 [2024-10-08T16:39:02.060Z] =================================================================================================================== 00:31:08.737 [2024-10-08T16:39:02.060Z] Total : 23420.80 91.49 0.00 0.00 0.00 0.00 0.00 00:31:08.737 00:31:08.737 00:31:08.737 Latency(us) 00:31:08.737 [2024-10-08T16:39:02.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.737 Nvme0n1 : 10.00 23421.96 91.49 0.00 0.00 5461.94 3214.38 27213.04 00:31:08.737 
[2024-10-08T16:39:02.060Z] =================================================================================================================== 00:31:08.737 [2024-10-08T16:39:02.060Z] Total : 23421.96 91.49 0.00 0.00 5461.94 3214.38 27213.04 00:31:08.737 { 00:31:08.737 "results": [ 00:31:08.737 { 00:31:08.737 "job": "Nvme0n1", 00:31:08.737 "core_mask": "0x2", 00:31:08.737 "workload": "randwrite", 00:31:08.737 "status": "finished", 00:31:08.737 "queue_depth": 128, 00:31:08.737 "io_size": 4096, 00:31:08.737 "runtime": 10.002239, 00:31:08.737 "iops": 23421.955824090986, 00:31:08.737 "mibps": 91.49201493785542, 00:31:08.737 "io_failed": 0, 00:31:08.737 "io_timeout": 0, 00:31:08.737 "avg_latency_us": 5461.937611957774, 00:31:08.737 "min_latency_us": 3214.384761904762, 00:31:08.737 "max_latency_us": 27213.04380952381 00:31:08.737 } 00:31:08.737 ], 00:31:08.737 "core_count": 1 00:31:08.737 } 00:31:08.737 18:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 621344 00:31:08.737 18:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 621344 ']' 00:31:08.737 18:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 621344 00:31:08.737 18:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:31:08.737 18:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:08.737 18:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 621344 00:31:08.737 18:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:08.737 18:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:08.737 18:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 621344' 00:31:08.737 killing process with pid 621344 00:31:08.737 18:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 621344 00:31:08.737 Received shutdown signal, test time was about 10.000000 seconds 00:31:08.737 00:31:08.737 Latency(us) 00:31:08.737 [2024-10-08T16:39:02.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.737 [2024-10-08T16:39:02.060Z] =================================================================================================================== 00:31:08.737 [2024-10-08T16:39:02.060Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:08.737 18:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 621344 00:31:08.996 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:08.996 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:31:09.255 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d939913-82b2-44d4-b435-737ca9ef88d0 00:31:09.255 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 618057 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 618057 00:31:09.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 618057 Killed "${NVMF_APP[@]}" "$@" 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=623342 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 623342 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 623342 ']' 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
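The run above is the core of the test: bdevperf attaches to the target over TCP (the Nvme0n1 JSON dump shows the 38912-block namespace, i.e. the 150 MiB lvol), perform_tests drives 4 KiB random writes for 10 seconds, and roughly two seconds in the lvstore is grown so total_data_clusters jumps from 49 to 99. The free-cluster count is then checked (the 150 MiB lvol occupies ceil(150/4) = 38 clusters, and 99 - 38 = 61), and the target is killed with SIGKILL so the lvstore is left dirty on disk. A sketch, with $nvmfpid standing for the nvmf_tgt pid (618057 in the trace) and the idle bdevperf from the previous step still waiting on its socket:

    # attach the exported namespace inside bdevperf via its private RPC socket
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # kick off the queued randwrite run, then grow the lvstore while it is active
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    test_pid=$!
    sleep 2
    $RPC bdev_lvol_grow_lvstore -u "$lvs"              # 49 -> 99 data clusters
    clusters=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( clusters == 99 ))
    wait "$test_pid"

    free=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    (( free == 61 ))                                   # 99 total - 38 allocated

    kill -9 "$nvmfpid"                                 # leave the lvstore dirty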
00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:09.513 18:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:09.513 [2024-10-08 18:39:02.760746] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:09.513 [2024-10-08 18:39:02.761673] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:31:09.513 [2024-10-08 18:39:02.761712] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.772 [2024-10-08 18:39:02.836265] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.772 [2024-10-08 18:39:02.907972] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:09.772 [2024-10-08 18:39:02.908009] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:09.772 [2024-10-08 18:39:02.908016] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.772 [2024-10-08 18:39:02.908022] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.772 [2024-10-08 18:39:02.908028] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:09.772 [2024-10-08 18:39:02.908582] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.772 [2024-10-08 18:39:02.976263] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:09.772 [2024-10-08 18:39:02.976500] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
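The notices above show the replacement target coming up: a fresh nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with --interrupt-mode on a single core, and both spdk_threads (app_thread and nvmf_tgt_poll_group_000) are switched to interrupt mode. A minimal start-and-wait sketch; the real waitforlisten helper in test/common/autotest_common.sh is more thorough, so polling spdk_get_version here is a simplification:

    ip netns exec cvl_0_0_ns_spdk \
        $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!

    # block until the RPC socket answers; spdk_get_version is a cheap query
    until $RPC -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done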
00:31:10.340 18:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:10.340 18:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:31:10.340 18:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:10.340 18:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:10.340 18:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:10.340 18:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.340 18:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:10.599 [2024-10-08 18:39:03.817982] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:10.599 [2024-10-08 18:39:03.818185] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:10.599 [2024-10-08 18:39:03.818268] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:10.599 18:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:10.599 18:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 61e435e0-9bfd-452d-b740-37858f92414e 00:31:10.599 18:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=61e435e0-9bfd-452d-b740-37858f92414e 00:31:10.599 18:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:10.599 18:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:31:10.599 18:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:10.599 18:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:10.599 18:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:10.858 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 61e435e0-9bfd-452d-b740-37858f92414e -t 2000 00:31:11.116 [ 00:31:11.116 { 00:31:11.116 "name": "61e435e0-9bfd-452d-b740-37858f92414e", 00:31:11.116 "aliases": [ 00:31:11.116 "lvs/lvol" 00:31:11.116 ], 00:31:11.116 "product_name": "Logical Volume", 00:31:11.116 "block_size": 4096, 00:31:11.116 "num_blocks": 38912, 00:31:11.117 "uuid": "61e435e0-9bfd-452d-b740-37858f92414e", 00:31:11.117 "assigned_rate_limits": { 00:31:11.117 "rw_ios_per_sec": 0, 00:31:11.117 "rw_mbytes_per_sec": 0, 00:31:11.117 
"r_mbytes_per_sec": 0, 00:31:11.117 "w_mbytes_per_sec": 0 00:31:11.117 }, 00:31:11.117 "claimed": false, 00:31:11.117 "zoned": false, 00:31:11.117 "supported_io_types": { 00:31:11.117 "read": true, 00:31:11.117 "write": true, 00:31:11.117 "unmap": true, 00:31:11.117 "flush": false, 00:31:11.117 "reset": true, 00:31:11.117 "nvme_admin": false, 00:31:11.117 "nvme_io": false, 00:31:11.117 "nvme_io_md": false, 00:31:11.117 "write_zeroes": true, 00:31:11.117 "zcopy": false, 00:31:11.117 "get_zone_info": false, 00:31:11.117 "zone_management": false, 00:31:11.117 "zone_append": false, 00:31:11.117 "compare": false, 00:31:11.117 "compare_and_write": false, 00:31:11.117 "abort": false, 00:31:11.117 "seek_hole": true, 00:31:11.117 "seek_data": true, 00:31:11.117 "copy": false, 00:31:11.117 "nvme_iov_md": false 00:31:11.117 }, 00:31:11.117 "driver_specific": { 00:31:11.117 "lvol": { 00:31:11.117 "lvol_store_uuid": "5d939913-82b2-44d4-b435-737ca9ef88d0", 00:31:11.117 "base_bdev": "aio_bdev", 00:31:11.117 "thin_provision": false, 00:31:11.117 "num_allocated_clusters": 38, 00:31:11.117 "snapshot": false, 00:31:11.117 "clone": false, 00:31:11.117 "esnap_clone": false 00:31:11.117 } 00:31:11.117 } 00:31:11.117 } 00:31:11.117 ] 00:31:11.117 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:31:11.117 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d939913-82b2-44d4-b435-737ca9ef88d0 00:31:11.117 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:11.117 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:11.117 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d939913-82b2-44d4-b435-737ca9ef88d0 00:31:11.117 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:11.376 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:11.376 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:11.634 [2024-10-08 18:39:04.797062] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:11.634 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d939913-82b2-44d4-b435-737ca9ef88d0 00:31:11.634 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:31:11.634 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d939913-82b2-44d4-b435-737ca9ef88d0 00:31:11.634 18:39:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.634 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:11.634 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.634 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:11.634 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.634 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:11.634 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.634 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:11.634 18:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d939913-82b2-44d4-b435-737ca9ef88d0 00:31:11.893 request: 00:31:11.893 { 00:31:11.893 "uuid": "5d939913-82b2-44d4-b435-737ca9ef88d0", 00:31:11.893 "method": "bdev_lvol_get_lvstores", 00:31:11.893 "req_id": 1 00:31:11.893 } 00:31:11.893 Got JSON-RPC error response 00:31:11.893 response: 00:31:11.893 { 00:31:11.893 "code": -19, 00:31:11.893 "message": "No such device" 00:31:11.893 } 00:31:11.893 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:31:11.893 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:11.893 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:11.893 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:11.893 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:12.152 aio_bdev 00:31:12.152 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 61e435e0-9bfd-452d-b740-37858f92414e 00:31:12.152 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=61e435e0-9bfd-452d-b740-37858f92414e 00:31:12.152 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:12.152 18:39:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:31:12.152 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:12.152 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:12.152 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:12.152 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 61e435e0-9bfd-452d-b740-37858f92414e -t 2000 00:31:12.411 [ 00:31:12.411 { 00:31:12.411 "name": "61e435e0-9bfd-452d-b740-37858f92414e", 00:31:12.411 "aliases": [ 00:31:12.411 "lvs/lvol" 00:31:12.411 ], 00:31:12.411 "product_name": "Logical Volume", 00:31:12.411 "block_size": 4096, 00:31:12.411 "num_blocks": 38912, 00:31:12.411 "uuid": "61e435e0-9bfd-452d-b740-37858f92414e", 00:31:12.411 "assigned_rate_limits": { 00:31:12.411 "rw_ios_per_sec": 0, 00:31:12.411 "rw_mbytes_per_sec": 0, 00:31:12.411 "r_mbytes_per_sec": 0, 00:31:12.411 "w_mbytes_per_sec": 0 00:31:12.411 }, 00:31:12.411 "claimed": false, 00:31:12.411 "zoned": false, 00:31:12.411 "supported_io_types": { 00:31:12.411 "read": true, 00:31:12.411 "write": true, 00:31:12.411 "unmap": true, 00:31:12.411 "flush": false, 00:31:12.411 "reset": true, 00:31:12.411 "nvme_admin": false, 00:31:12.411 "nvme_io": false, 00:31:12.411 "nvme_io_md": false, 00:31:12.411 "write_zeroes": true, 00:31:12.411 "zcopy": false, 00:31:12.411 "get_zone_info": false, 00:31:12.411 "zone_management": false, 00:31:12.411 "zone_append": false, 00:31:12.411 "compare": false, 00:31:12.411 "compare_and_write": false, 00:31:12.411 "abort": false, 00:31:12.411 "seek_hole": true, 00:31:12.411 "seek_data": true, 00:31:12.411 "copy": false, 00:31:12.411 "nvme_iov_md": false 00:31:12.411 }, 00:31:12.411 "driver_specific": { 00:31:12.411 "lvol": { 00:31:12.411 "lvol_store_uuid": "5d939913-82b2-44d4-b435-737ca9ef88d0", 00:31:12.411 "base_bdev": "aio_bdev", 00:31:12.411 "thin_provision": false, 00:31:12.411 "num_allocated_clusters": 38, 00:31:12.411 "snapshot": false, 00:31:12.411 "clone": false, 00:31:12.411 "esnap_clone": false 00:31:12.411 } 00:31:12.411 } 00:31:12.411 } 00:31:12.411 ] 00:31:12.411 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:31:12.411 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d939913-82b2-44d4-b435-737ca9ef88d0 00:31:12.411 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:12.669 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:12.670 18:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d939913-82b2-44d4-b435-737ca9ef88d0 00:31:12.670 18:39:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:12.928 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:12.928 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 61e435e0-9bfd-452d-b740-37858f92414e 00:31:12.928 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5d939913-82b2-44d4-b435-737ca9ef88d0 00:31:13.186 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:13.445 00:31:13.445 real 0m18.246s 00:31:13.445 user 0m35.376s 00:31:13.445 sys 0m3.911s 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:13.445 ************************************ 00:31:13.445 END TEST lvs_grow_dirty 00:31:13.445 ************************************ 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:13.445 nvmf_trace.0 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
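Re-registering the AIO bdev on the restarted target is what forces the recovery path exercised above: loading the lvstore finds a dirty superblock ("Performing recovery on blobstore"), replays blobs 0x0 and 0x1, and the lvol reappears intact with num_allocated_clusters 38, so the 61/99 free/total checks still pass. The base bdev is then hot-removed to confirm lvstore lookups fail with -19 (No such device), recovered once more, and the whole stack is torn down. A sketch under the same variable assumptions as before:

    $RPC bdev_aio_create /tmp/aio_file aio_bdev 4096   # triggers blobstore recovery
    $RPC bdev_wait_for_examine
    $RPC bdev_get_bdevs -b "$lvol" -t 2000 >/dev/null  # lvol survived the SIGKILL

    free=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    total=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 ))

    # hot-remove the base bdev: the lvstore closes and lookups must fail (-19)
    $RPC bdev_aio_delete aio_bdev
    if $RPC bdev_lvol_get_lvstores -u "$lvs"; then exit 1; fi

    # recover one last time, then tear everything down
    $RPC bdev_aio_create /tmp/aio_file aio_bdev 4096
    $RPC bdev_lvol_delete "$lvol"
    $RPC bdev_lvol_delete_lvstore -u "$lvs"
    $RPC bdev_aio_delete aio_bdev
    rm -f /tmp/aio_file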
00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.445 rmmod nvme_tcp 00:31:13.445 rmmod nvme_fabrics 00:31:13.445 rmmod nvme_keyring 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 623342 ']' 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 623342 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 623342 ']' 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 623342 00:31:13.445 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:31:13.704 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:13.704 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 623342 00:31:13.704 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:13.704 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:13.704 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 623342' 00:31:13.704 killing process with pid 623342 00:31:13.704 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 623342 00:31:13.704 18:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 623342 00:31:13.704 18:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:13.704 18:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:13.704 18:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:13.704 18:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:13.704 18:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:31:13.704 18:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:13.704 18:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:31:13.964 18:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.964 18:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:13.964 18:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.964 18:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.964 18:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.870 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:15.870 00:31:15.870 real 0m44.231s 00:31:15.870 user 0m53.904s 00:31:15.870 sys 0m10.380s 00:31:15.870 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:15.870 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:15.870 ************************************ 00:31:15.870 END TEST nvmf_lvs_grow 00:31:15.870 ************************************ 00:31:15.870 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:15.870 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:15.870 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:15.870 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:15.870 ************************************ 00:31:15.870 START TEST nvmf_bdev_io_wait 00:31:15.870 ************************************ 00:31:15.870 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:16.130 * Looking for test storage... 
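With nvmf_lvs_grow finished (the END TEST banners and timing above), autotest chains into the next suite through the same run_test wrapper that produced every START/END banner in this log. A simplified stand-in for that helper; the real one in test/common/autotest_common.sh also records per-test timing data and xtrace state:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }

    run_test nvmf_bdev_io_wait \
        $SPDK/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode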
00:31:16.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:16.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.130 --rc genhtml_branch_coverage=1 00:31:16.130 --rc genhtml_function_coverage=1 00:31:16.130 --rc genhtml_legend=1 00:31:16.130 --rc geninfo_all_blocks=1 00:31:16.130 --rc geninfo_unexecuted_blocks=1 00:31:16.130 00:31:16.130 ' 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:16.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.130 --rc genhtml_branch_coverage=1 00:31:16.130 --rc genhtml_function_coverage=1 00:31:16.130 --rc genhtml_legend=1 00:31:16.130 --rc geninfo_all_blocks=1 00:31:16.130 --rc geninfo_unexecuted_blocks=1 00:31:16.130 00:31:16.130 ' 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:16.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.130 --rc genhtml_branch_coverage=1 00:31:16.130 --rc genhtml_function_coverage=1 00:31:16.130 --rc genhtml_legend=1 00:31:16.130 --rc geninfo_all_blocks=1 00:31:16.130 --rc geninfo_unexecuted_blocks=1 00:31:16.130 00:31:16.130 ' 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:16.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.130 --rc genhtml_branch_coverage=1 00:31:16.130 --rc genhtml_function_coverage=1 00:31:16.130 --rc genhtml_legend=1 00:31:16.130 --rc geninfo_all_blocks=1 00:31:16.130 --rc 
geninfo_unexecuted_blocks=1 00:31:16.130 00:31:16.130 ' 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.130 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:16.131 18:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
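The e810/x722/mlx arrays above key the supported NIC families off PCI vendor:device IDs (0x8086 is the intel variable, 0x15b3 mellanox). A minimal standalone sketch of that classification step, with an lspci scan standing in for the harness's pci_bus_cache lookup (the lspci substitution is this sketch's assumption, not what common.sh actually does):

    # Classify NVMe-oF-capable NICs by PCI vendor:device ID, mirroring the
    # e810/x722/mlx arrays built above; lspci -Dn prints slot, class, vendor:device.
    declare -a e810 x722 mlx
    while read -r slot _class vd _rest; do
      case "$vd" in
        8086:1592 | 8086:159b) e810+=("$slot") ;;  # Intel E810 family (matched here: 0x159b)
        8086:37d2)             x722+=("$slot") ;;  # Intel X722
        15b3:*)                mlx+=("$slot") ;;   # Mellanox; broader than the explicit ID list above
      esac
    done < <(lspci -Dn)
    printf 'e810 device: %s\n' "${e810[@]}"

With SPDK_TEST_NVMF_NICS=e810, the script then narrows pci_devs to the E810 hits; the (( 2 == 0 )) guard above passes because two matching ports were found before the per-device discovery loop below runs.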
00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.697 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:22.697 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:22.698 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.698 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.698 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.698 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.698 18:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:22.698 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:22.698 Found net devices under 0000:86:00.0: cvl_0_0 00:31:22.698 
18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:22.698 Found net devices under 0000:86:00.1: cvl_0_1 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:22.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:22.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:31:22.698 00:31:22.698 --- 10.0.0.2 ping statistics --- 00:31:22.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.698 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:22.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:22.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:31:22.698 00:31:22.698 --- 10.0.0.1 ping statistics --- 00:31:22.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.698 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=627970 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 627970 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 627970 ']' 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
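Condensed from the nvmf_tcp_init trace above: the target-side port (cvl_0_0) is moved into its own network namespace so that the initiator port (cvl_0_1) must reach it over real TCP rather than local delivery. Commands are verbatim from the trace, minus the harness wrappers:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on the initiator side
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

The sub-millisecond round trips in the ping output above verify both directions before the target is launched.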
00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:22.698 18:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.698 [2024-10-08 18:39:15.335052] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:22.698 [2024-10-08 18:39:15.335977] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:31:22.698 [2024-10-08 18:39:15.336011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.698 [2024-10-08 18:39:15.409137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:22.698 [2024-10-08 18:39:15.488855] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:22.698 [2024-10-08 18:39:15.488890] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.698 [2024-10-08 18:39:15.488897] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:22.699 [2024-10-08 18:39:15.488904] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:22.699 [2024-10-08 18:39:15.488909] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:22.699 [2024-10-08 18:39:15.490331] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.699 [2024-10-08 18:39:15.490369] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:22.699 [2024-10-08 18:39:15.490398] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:31:22.699 [2024-10-08 18:39:15.490401] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.699 [2024-10-08 18:39:15.490876] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
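The nvmf_tgt launch that produced those startup notices, annotated; flags are verbatim from the command recorded above, and each annotation restates what the corresponding notice reports:

    args=(
      -i 0               # shared-memory instance id (NVMF_APP_SHM_ID)
      -e 0xFFFF          # tracepoint group mask, per the app_setup_trace notices
      --interrupt-mode   # reactors wait on events instead of busy-polling
      -m 0xF             # core mask 0xF: the four reactors on cores 0-3 above
      --wait-for-rpc     # hold framework init until an explicit RPC (see below)
    )
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt "${args[@]}"    # binary path shortened

Running under ip netns exec matters here: the target must bind 10.0.0.2:4420, an address that exists only inside cvl_0_0_ns_spdk.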
00:31:22.958 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:22.958 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:31:22.958 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:22.958 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:22.958 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.958 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:22.958 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:22.958 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.958 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.958 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.958 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:22.958 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.958 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:23.217 [2024-10-08 18:39:16.293855] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:23.217 [2024-10-08 18:39:16.294016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:23.217 [2024-10-08 18:39:16.294373] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:23.217 [2024-10-08 18:39:16.294649] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
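Because the target started with --wait-for-rpc, bdev options can still be changed before the framework comes up; the rpc_cmd calls above and the ones that follow form the entire target configuration for this test. Sketched as standalone scripts/rpc.py invocations over the default /var/tmp/spdk.sock (method names and arguments are verbatim from the trace; using rpc.py directly is this sketch's framing, not the harness's literal mechanism):

    ./scripts/rpc.py bdev_set_options -p 5 -c 1    # deliberately tiny bdev_io pool/cache, so
                                                   # submissions hit ENOMEM and exercise io_wait
    ./scripts/rpc.py framework_start_init          # completes the deferred startup above
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener RPC lands, the trace below reports the target listening on 10.0.0.2 port 4420 and the bdevperf initiators can attach.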
00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:23.217 [2024-10-08 18:39:16.303246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:23.217 Malloc0 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:23.217 [2024-10-08 18:39:16.391610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=628216 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:23.217 18:39:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=628218 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:23.217 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:23.217 { 00:31:23.217 "params": { 00:31:23.217 "name": "Nvme$subsystem", 00:31:23.217 "trtype": "$TEST_TRANSPORT", 00:31:23.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.218 "adrfam": "ipv4", 00:31:23.218 "trsvcid": "$NVMF_PORT", 00:31:23.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.218 "hdgst": ${hdgst:-false}, 00:31:23.218 "ddgst": ${ddgst:-false} 00:31:23.218 }, 00:31:23.218 "method": "bdev_nvme_attach_controller" 00:31:23.218 } 00:31:23.218 EOF 00:31:23.218 )") 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=628220 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:23.218 { 00:31:23.218 "params": { 00:31:23.218 "name": "Nvme$subsystem", 00:31:23.218 "trtype": "$TEST_TRANSPORT", 00:31:23.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.218 "adrfam": "ipv4", 00:31:23.218 "trsvcid": "$NVMF_PORT", 00:31:23.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.218 "hdgst": ${hdgst:-false}, 00:31:23.218 "ddgst": ${ddgst:-false} 00:31:23.218 }, 00:31:23.218 "method": "bdev_nvme_attach_controller" 00:31:23.218 } 00:31:23.218 EOF 00:31:23.218 )") 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=628223 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:23.218 { 00:31:23.218 "params": { 00:31:23.218 "name": "Nvme$subsystem", 00:31:23.218 "trtype": "$TEST_TRANSPORT", 00:31:23.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.218 "adrfam": "ipv4", 00:31:23.218 "trsvcid": "$NVMF_PORT", 00:31:23.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.218 "hdgst": ${hdgst:-false}, 00:31:23.218 "ddgst": ${ddgst:-false} 00:31:23.218 }, 00:31:23.218 "method": "bdev_nvme_attach_controller" 00:31:23.218 } 00:31:23.218 EOF 00:31:23.218 )") 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:23.218 { 00:31:23.218 "params": { 00:31:23.218 "name": "Nvme$subsystem", 00:31:23.218 "trtype": "$TEST_TRANSPORT", 00:31:23.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.218 "adrfam": "ipv4", 00:31:23.218 "trsvcid": "$NVMF_PORT", 00:31:23.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.218 "hdgst": ${hdgst:-false}, 00:31:23.218 "ddgst": ${ddgst:-false} 00:31:23.218 }, 00:31:23.218 "method": "bdev_nvme_attach_controller" 00:31:23.218 } 00:31:23.218 EOF 00:31:23.218 )") 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 628216 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
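gen_nvmf_target_json, traced above, assembles the bdevperf configuration by collecting one heredoc fragment per requested subsystem into config[], comma-joining the fragments with IFS=',', and normalizing the result with jq. A loose, runnable re-creation of the idiom (not the helper's exact code; the subsystems/config wrapper is the SPDK JSON-config shape that bdevperf --json consumes):

    # Re-creation of the fragment-assembly idiom; the real helper builds each
    # fragment with cat <<-EOF and pulls addresses/ports from the environment.
    config=()
    for subsystem in 1; do
      config+=('{ "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme'"$subsystem"'", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode'"$subsystem"'",
                    "hostnqn": "nqn.2016-06.io.spdk:host'"$subsystem"'",
                    "hdgst": false, "ddgst": false } }')
    done
    joined=$(IFS=,; printf '%s' "${config[*]}")
    printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }' "$joined" | jq .

The printf '%s\n' '{ ... }' entries in the trace that follows are exactly these fragments with the test's values (tcp, 10.0.0.2, port 4420) substituted in.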
00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:23.218 "params": { 00:31:23.218 "name": "Nvme1", 00:31:23.218 "trtype": "tcp", 00:31:23.218 "traddr": "10.0.0.2", 00:31:23.218 "adrfam": "ipv4", 00:31:23.218 "trsvcid": "4420", 00:31:23.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.218 "hdgst": false, 00:31:23.218 "ddgst": false 00:31:23.218 }, 00:31:23.218 "method": "bdev_nvme_attach_controller" 00:31:23.218 }' 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:23.218 "params": { 00:31:23.218 "name": "Nvme1", 00:31:23.218 "trtype": "tcp", 00:31:23.218 "traddr": "10.0.0.2", 00:31:23.218 "adrfam": "ipv4", 00:31:23.218 "trsvcid": "4420", 00:31:23.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.218 "hdgst": false, 00:31:23.218 "ddgst": false 00:31:23.218 }, 00:31:23.218 "method": "bdev_nvme_attach_controller" 00:31:23.218 }' 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:23.218 "params": { 00:31:23.218 "name": "Nvme1", 00:31:23.218 "trtype": "tcp", 00:31:23.218 "traddr": "10.0.0.2", 00:31:23.218 "adrfam": "ipv4", 00:31:23.218 "trsvcid": "4420", 00:31:23.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.218 "hdgst": false, 00:31:23.218 "ddgst": false 00:31:23.218 }, 00:31:23.218 "method": "bdev_nvme_attach_controller" 00:31:23.218 }' 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:31:23.218 18:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:23.218 "params": { 00:31:23.218 "name": "Nvme1", 00:31:23.218 "trtype": "tcp", 00:31:23.218 "traddr": "10.0.0.2", 00:31:23.218 "adrfam": "ipv4", 00:31:23.218 "trsvcid": "4420", 00:31:23.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.218 "hdgst": false, 00:31:23.218 "ddgst": false 00:31:23.218 }, 00:31:23.218 "method": "bdev_nvme_attach_controller" 00:31:23.218 }' 00:31:23.218 [2024-10-08 18:39:16.443563] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:31:23.218 [2024-10-08 18:39:16.443563] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:31:23.218 [2024-10-08 18:39:16.443626] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:23.218 [2024-10-08 18:39:16.443627] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:23.218 [2024-10-08 18:39:16.443754] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:31:23.218 [2024-10-08 18:39:16.443798] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:23.218 [2024-10-08 18:39:16.448492] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:31:23.218 [2024-10-08 18:39:16.448541] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:23.477 [2024-10-08 18:39:16.638254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.477 [2024-10-08 18:39:16.715160] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:31:23.477 [2024-10-08 18:39:16.732570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.735 [2024-10-08 18:39:16.809682] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:31:23.735 [2024-10-08 18:39:16.823270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.735 [2024-10-08 18:39:16.877318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.735 [2024-10-08 18:39:16.906256] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:31:23.994 [2024-10-08 18:39:16.954252] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:31:23.994 Running I/O for 1 seconds... 00:31:23.994 Running I/O for 1 seconds... 00:31:23.994 Running I/O for 1 seconds... 00:31:24.252 Running I/O for 1 seconds...
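Four bdevperf instances now run concurrently, one per I/O type, each pinned to its own core (-m), given its own instance id (-i, hence the spdk1..spdk4 EAL file prefixes above), and handed a private copy of the target JSON through process substitution. Condensed from the launch commands in the trace (binary path shortened):

    BP=./build/examples/bdevperf
    "$BP" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    "$BP" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    "$BP" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    "$BP" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait "$WRITE_PID"    # the script waits on each PID in turn, as the trace below shows

-q 128 is the queue depth, -o 4096 the I/O size in bytes, -t 1 the run time in seconds, and -s 256 the per-instance memory in MB (matching the -m 256 in the EAL parameter lines above). In the result tables that follow, MiB/s = IOPS x 4096 / 2^20; e.g. 11996.50 IOPS x 4096 B ≈ 46.86 MiB/s on the read job.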
00:31:25.188 11951.00 IOPS, 46.68 MiB/s [2024-10-08T16:39:18.511Z] 9984.00 IOPS, 39.00 MiB/s
00:31:25.188 Latency(us)
00:31:25.188 [2024-10-08T16:39:18.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:25.188 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:31:25.188 Nvme1n1 : 1.01 11996.50 46.86 0.00 0.00 10629.30 4930.80 13918.60
00:31:25.188 [2024-10-08T16:39:18.511Z] ===================================================================================================================
00:31:25.188 [2024-10-08T16:39:18.511Z] Total : 11996.50 46.86 0.00 0.00 10629.30 4930.80 13918.60
00:31:25.188
00:31:25.188 Latency(us)
00:31:25.188 [2024-10-08T16:39:18.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:25.188 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:31:25.188 Nvme1n1 : 1.01 10056.79 39.28 0.00 0.00 12679.84 6179.11 16727.28
00:31:25.188 [2024-10-08T16:39:18.511Z] ===================================================================================================================
00:31:25.188 [2024-10-08T16:39:18.511Z] Total : 10056.79 39.28 0.00 0.00 12679.84 6179.11 16727.28
00:31:25.188 11649.00 IOPS, 45.50 MiB/s
00:31:25.188 Latency(us)
00:31:25.188 [2024-10-08T16:39:18.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:25.188 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:31:25.188 Nvme1n1 : 1.01 11757.25 45.93 0.00 0.00 10862.67 3167.57 16227.96
00:31:25.188 [2024-10-08T16:39:18.511Z] ===================================================================================================================
00:31:25.188 [2024-10-08T16:39:18.511Z] Total : 11757.25 45.93 0.00 0.00 10862.67 3167.57 16227.96
00:31:25.188 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 628218 00:31:25.188 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 628220
00:31:25.188 253936.00 IOPS, 991.94 MiB/s
00:31:25.188 Latency(us)
00:31:25.188 [2024-10-08T16:39:18.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:25.188 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:31:25.188 Nvme1n1 : 1.00 253549.48 990.43 0.00 0.00 502.76 230.16 1497.97
00:31:25.188 [2024-10-08T16:39:18.511Z] ===================================================================================================================
00:31:25.188 [2024-10-08T16:39:18.511Z] Total : 253549.48 990.43 0.00 0.00 502.76 230.16 1497.97
00:31:25.447 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 628223 00:31:25.447 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:25.447 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.447 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:25.447 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.447 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:25.447 18:39:18
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:25.447 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:25.447 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:25.447 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:25.447 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:25.447 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:25.447 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:25.447 rmmod nvme_tcp 00:31:25.447 rmmod nvme_fabrics 00:31:25.706 rmmod nvme_keyring 00:31:25.706 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:25.706 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:25.706 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:25.706 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 627970 ']' 00:31:25.706 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 627970 00:31:25.706 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 627970 ']' 00:31:25.706 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 627970 00:31:25.706 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:31:25.706 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:25.706 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 627970 00:31:25.706 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:25.706 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:25.706 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 627970' 00:31:25.706 killing process with pid 627970 00:31:25.706 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 627970 00:31:25.706 18:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 627970 00:31:25.965 18:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:25.965 18:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:25.965 18:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:25.965 18:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:25.965 18:39:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:31:25.965 18:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:25.965 18:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:31:25.965 18:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:25.965 18:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:25.965 18:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.965 18:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.965 18:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.867 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:27.867 00:31:27.867 real 0m11.947s 00:31:27.867 user 0m17.261s 00:31:27.867 sys 0m6.923s 00:31:27.867 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:27.867 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.867 ************************************ 00:31:27.867 END TEST nvmf_bdev_io_wait 00:31:27.867 ************************************ 00:31:27.867 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:27.867 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:27.867 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:27.867 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:27.867 ************************************ 00:31:27.867 START TEST nvmf_queue_depth 00:31:27.867 ************************************ 00:31:27.867 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:28.126 * Looking for test storage... 
00:31:28.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:28.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.126 --rc genhtml_branch_coverage=1 00:31:28.126 --rc genhtml_function_coverage=1 00:31:28.126 --rc genhtml_legend=1 00:31:28.126 --rc geninfo_all_blocks=1 00:31:28.126 --rc geninfo_unexecuted_blocks=1 00:31:28.126 00:31:28.126 ' 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:28.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.126 --rc genhtml_branch_coverage=1 00:31:28.126 --rc genhtml_function_coverage=1 00:31:28.126 --rc genhtml_legend=1 00:31:28.126 --rc geninfo_all_blocks=1 00:31:28.126 --rc geninfo_unexecuted_blocks=1 00:31:28.126 00:31:28.126 ' 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:28.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.126 --rc genhtml_branch_coverage=1 00:31:28.126 --rc genhtml_function_coverage=1 00:31:28.126 --rc genhtml_legend=1 00:31:28.126 --rc geninfo_all_blocks=1 00:31:28.126 --rc geninfo_unexecuted_blocks=1 00:31:28.126 00:31:28.126 ' 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:28.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.126 --rc genhtml_branch_coverage=1 00:31:28.126 --rc genhtml_function_coverage=1 00:31:28.126 --rc genhtml_legend=1 00:31:28.126 --rc geninfo_all_blocks=1 00:31:28.126 --rc 
geninfo_unexecuted_blocks=1 00:31:28.126 00:31:28.126 ' 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.126 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:28.127 18:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:34.692 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:34.693 18:39:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:34.693 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:34.693 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:31:34.693 Found net devices under 0000:86:00.0: cvl_0_0 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:34.693 18:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:34.693 Found net devices under 0000:86:00.1: cvl_0_1 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:34.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:34.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:31:34.693 00:31:34.693 --- 10.0.0.2 ping statistics --- 00:31:34.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.693 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:34.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:34.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:31:34.693 00:31:34.693 --- 10.0.0.1 ping statistics --- 00:31:34.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.693 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=632006 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 632006 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 632006 ']' 00:31:34.693 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:34.694 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:34.694 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:34.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
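The nvmf_tcp_init steps traced above wire the two ice ports into a split-namespace topology so that initiator and target traffic cross the physical link rather than loopback. Collected into one runnable sequence (names and addresses exactly as logged; run as root, after flushing any stale addresses):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns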
00:31:34.694 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:34.694 18:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:34.694 [2024-10-08 18:39:27.349093] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:34.694 [2024-10-08 18:39:27.350091] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:31:34.694 [2024-10-08 18:39:27.350131] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:34.694 [2024-10-08 18:39:27.424773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.694 [2024-10-08 18:39:27.505222] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:34.694 [2024-10-08 18:39:27.505256] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:34.694 [2024-10-08 18:39:27.505264] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:34.694 [2024-10-08 18:39:27.505270] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:34.694 [2024-10-08 18:39:27.505276] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:34.694 [2024-10-08 18:39:27.505793] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.694 [2024-10-08 18:39:27.572882] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:34.694 [2024-10-08 18:39:27.573098] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:34.952 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:34.952 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:31:34.952 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:34.952 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:34.952 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:34.952 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.952 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:34.952 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.952 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:34.952 [2024-10-08 18:39:28.230470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:34.952 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.952 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:34.952 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.952 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.210 Malloc0 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.210 [2024-10-08 18:39:28.302472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=632251 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 632251 /var/tmp/bdevperf.sock 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 632251 ']' 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:35.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:35.210 18:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.210 [2024-10-08 18:39:28.350869] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:31:35.210 [2024-10-08 18:39:28.350910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632251 ] 00:31:35.210 [2024-10-08 18:39:28.416965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.210 [2024-10-08 18:39:28.494564] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.145 18:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:36.145 18:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:31:36.145 18:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:36.145 18:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.145 18:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:36.145 NVMe0n1 00:31:36.145 18:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.145 18:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:36.145 Running I/O for 10 seconds... 00:31:38.460 11690.00 IOPS, 45.66 MiB/s [2024-10-08T16:39:32.718Z] 12209.00 IOPS, 47.69 MiB/s [2024-10-08T16:39:33.652Z] 12281.00 IOPS, 47.97 MiB/s [2024-10-08T16:39:34.587Z] 12313.75 IOPS, 48.10 MiB/s [2024-10-08T16:39:35.521Z] 12388.40 IOPS, 48.39 MiB/s [2024-10-08T16:39:36.456Z] 12446.17 IOPS, 48.62 MiB/s [2024-10-08T16:39:37.395Z] 12445.57 IOPS, 48.62 MiB/s [2024-10-08T16:39:38.770Z] 12451.25 IOPS, 48.64 MiB/s [2024-10-08T16:39:39.706Z] 12499.89 IOPS, 48.83 MiB/s [2024-10-08T16:39:39.706Z] 12488.90 IOPS, 48.78 MiB/s 00:31:46.383 Latency(us) 00:31:46.383 [2024-10-08T16:39:39.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.383 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:46.383 Verification LBA range: start 0x0 length 0x4000 00:31:46.383 NVMe0n1 : 10.06 12512.42 48.88 0.00 0.00 81581.33 18849.40 53177.78 00:31:46.383 [2024-10-08T16:39:39.706Z] =================================================================================================================== 00:31:46.383 [2024-10-08T16:39:39.706Z] Total : 12512.42 48.88 0.00 0.00 81581.33 18849.40 53177.78 00:31:46.383 { 00:31:46.383 "results": [ 00:31:46.383 { 00:31:46.383 "job": "NVMe0n1", 00:31:46.383 "core_mask": "0x1", 00:31:46.383 "workload": "verify", 00:31:46.383 "status": "finished", 00:31:46.383 "verify_range": { 00:31:46.383 "start": 0, 00:31:46.383 "length": 16384 00:31:46.383 }, 00:31:46.383 "queue_depth": 1024, 00:31:46.383 "io_size": 4096, 00:31:46.383 "runtime": 10.062482, 00:31:46.383 "iops": 12512.419897993357, 00:31:46.383 "mibps": 48.87664022653655, 00:31:46.383 "io_failed": 0, 00:31:46.383 "io_timeout": 0, 00:31:46.383 "avg_latency_us": 81581.33291336772, 00:31:46.383 "min_latency_us": 18849.401904761904, 00:31:46.383 "max_latency_us": 53177.782857142854 00:31:46.383 } 
00:31:46.383 ], 00:31:46.383 "core_count": 1 00:31:46.383 } 00:31:46.383 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 632251 00:31:46.383 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 632251 ']' 00:31:46.383 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 632251 00:31:46.383 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:31:46.383 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:46.383 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 632251 00:31:46.383 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:46.383 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:46.383 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 632251' 00:31:46.383 killing process with pid 632251 00:31:46.383 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 632251 00:31:46.383 Received shutdown signal, test time was about 10.000000 seconds 00:31:46.383 00:31:46.383 Latency(us) 00:31:46.383 [2024-10-08T16:39:39.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.383 [2024-10-08T16:39:39.706Z] =================================================================================================================== 00:31:46.383 [2024-10-08T16:39:39.706Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:46.383 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 632251 00:31:46.383 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:46.383 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:46.383 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:46.383 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:46.642 rmmod nvme_tcp 00:31:46.642 rmmod nvme_fabrics 00:31:46.642 rmmod nvme_keyring 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:46.642 
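The measurement above is driven by bdevperf in client (-z) mode; condensed from the traced commands, it amounts to the following (paths and flags as logged: queue depth 1024, 4 KiB verify I/O for 10 s):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # once the RPC socket is listening, attach the remote namespace and start the run
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The reported numbers are self-consistent: 12512.42 IOPS x 4096 B is about 48.88 MiB/s, matching the MiB/s column, and by Little's law a queue depth of 1024 at that rate implies 1024 / 12512.42 s, roughly 81.8 ms of average latency, in line with the reported 81.58 ms.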
18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 632006 ']' 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 632006 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 632006 ']' 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 632006 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 632006 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 632006' 00:31:46.642 killing process with pid 632006 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 632006 00:31:46.642 18:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 632006 00:31:46.900 18:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:46.900 18:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:46.900 18:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:46.900 18:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:46.900 18:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:31:46.900 18:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:46.900 18:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:31:46.900 18:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:46.900 18:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:46.900 18:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.900 18:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:46.900 18:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.804 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:48.804 00:31:48.804 real 0m20.902s 00:31:48.804 user 0m24.076s 00:31:48.804 sys 0m6.505s 00:31:48.804 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
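nvmftestfini then reverses the setup. The iptr helper traced above is the save/filter/restore pipeline; _remove_spdk_ns runs with xtrace disabled, so its body is not shown, and the sketch below assumes it deletes the namespace (destroying a netns returns a physical port to the root namespace):

  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk                       # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                              # as traced at the end of the test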
common/autotest_common.sh@1126 -- # xtrace_disable 00:31:48.804 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:48.804 ************************************ 00:31:48.804 END TEST nvmf_queue_depth 00:31:48.804 ************************************ 00:31:48.804 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:48.804 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:48.804 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:48.804 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:49.063 ************************************ 00:31:49.063 START TEST nvmf_target_multipath 00:31:49.063 ************************************ 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:49.063 * Looking for test storage... 00:31:49.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:49.063 18:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.063 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:49.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.064 --rc genhtml_branch_coverage=1 00:31:49.064 --rc genhtml_function_coverage=1 00:31:49.064 --rc genhtml_legend=1 00:31:49.064 --rc geninfo_all_blocks=1 00:31:49.064 --rc geninfo_unexecuted_blocks=1 00:31:49.064 00:31:49.064 ' 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:49.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.064 --rc genhtml_branch_coverage=1 00:31:49.064 --rc genhtml_function_coverage=1 00:31:49.064 --rc genhtml_legend=1 00:31:49.064 --rc geninfo_all_blocks=1 00:31:49.064 --rc geninfo_unexecuted_blocks=1 00:31:49.064 00:31:49.064 ' 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:49.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.064 --rc genhtml_branch_coverage=1 00:31:49.064 --rc genhtml_function_coverage=1 00:31:49.064 --rc genhtml_legend=1 00:31:49.064 --rc geninfo_all_blocks=1 00:31:49.064 --rc 
geninfo_unexecuted_blocks=1 00:31:49.064 00:31:49.064 ' 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:49.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.064 --rc genhtml_branch_coverage=1 00:31:49.064 --rc genhtml_function_coverage=1 00:31:49.064 --rc genhtml_legend=1 00:31:49.064 --rc geninfo_all_blocks=1 00:31:49.064 --rc geninfo_unexecuted_blocks=1 00:31:49.064 00:31:49.064 ' 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
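The lt 1.15 2 check traced at the start of this test expands to cmp_versions, which splits each version on '.', '-' and ':' and compares component by component. A condensed sketch of the traced logic, covering only the '<' branch of its case statement:

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local IFS='.-:' v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # strictly smaller: true
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1  # equal versions are not '<'
  }

With this, lt 1.15 2 returns 0 (true) as soon as the first components compare 1 < 2, exactly as the trace shows, so the lcov-version-dependent LCOV_OPTS above get exported.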
00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.064 18:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:49.064 18:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
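The same NIC discovery that ran for the queue-depth test now repeats for multipath: gather_supported_nvmf_pci_devs selects the supported vendor:device IDs (the two 0x8086:0x159b ice ports on this rig) and resolves each PCI function to its kernel interface through sysfs. The loop traced below reduces to:

  net_devs=()
  for pci in "${pci_devs[@]}"; do                       # e.g. 0000:86:00.0, 0000:86:00.1
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdev dir(s) for this function
      pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path, keep the ifname
      net_devs+=("${pci_net_devs[@]}")                  # -> cvl_0_0, cvl_0_1
  done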
00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.631 18:39:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:55.631 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:55.631 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:55.631 18:39:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:55.631 Found net devices under 0000:86:00.0: cvl_0_0 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:55.631 Found net devices under 0000:86:00.1: cvl_0_1 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.631 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:55.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:55.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:31:55.632 00:31:55.632 --- 10.0.0.2 ping statistics --- 00:31:55.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.632 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:55.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:31:55.632 00:31:55.632 --- 10.0.0.1 ping statistics --- 00:31:55.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.632 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:55.632 only one NIC for nvmf test 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:55.632 rmmod nvme_tcp 00:31:55.632 rmmod nvme_fabrics 00:31:55.632 rmmod nvme_keyring 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:55.632 18:39:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.632 18:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:31:57.537 18:39:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:57.537 00:31:57.537 real 0m8.341s 00:31:57.537 user 0m1.796s 00:31:57.537 sys 0m4.550s 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:57.537 ************************************ 00:31:57.537 END TEST nvmf_target_multipath 00:31:57.537 ************************************ 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:57.537 ************************************ 00:31:57.537 START TEST nvmf_zcopy 00:31:57.537 ************************************ 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:57.537 * Looking for test storage... 
00:31:57.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:57.537 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:57.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.538 --rc genhtml_branch_coverage=1 00:31:57.538 --rc genhtml_function_coverage=1 00:31:57.538 --rc genhtml_legend=1 00:31:57.538 --rc geninfo_all_blocks=1 00:31:57.538 --rc geninfo_unexecuted_blocks=1 00:31:57.538 00:31:57.538 ' 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:57.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.538 --rc genhtml_branch_coverage=1 00:31:57.538 --rc genhtml_function_coverage=1 00:31:57.538 --rc genhtml_legend=1 00:31:57.538 --rc geninfo_all_blocks=1 00:31:57.538 --rc geninfo_unexecuted_blocks=1 00:31:57.538 00:31:57.538 ' 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:57.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.538 --rc genhtml_branch_coverage=1 00:31:57.538 --rc genhtml_function_coverage=1 00:31:57.538 --rc genhtml_legend=1 00:31:57.538 --rc geninfo_all_blocks=1 00:31:57.538 --rc geninfo_unexecuted_blocks=1 00:31:57.538 00:31:57.538 ' 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:57.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.538 --rc genhtml_branch_coverage=1 00:31:57.538 --rc genhtml_function_coverage=1 00:31:57.538 --rc genhtml_legend=1 00:31:57.538 --rc geninfo_all_blocks=1 00:31:57.538 --rc geninfo_unexecuted_blocks=1 00:31:57.538 00:31:57.538 ' 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.538 18:39:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:57.538 18:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:04.325 18:39:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:04.325 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:04.326 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:04.326 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:04.326 Found net devices under 0000:86:00.0: cvl_0_0 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:04.326 Found net devices under 0000:86:00.1: cvl_0_1 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:04.326 18:39:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:04.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:04.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:32:04.326 00:32:04.326 --- 10.0.0.2 ping statistics --- 00:32:04.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.326 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:04.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:04.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:32:04.326 00:32:04.326 --- 10.0.0.1 ping statistics --- 00:32:04.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.326 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=640950 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 640950 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 640950 ']' 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.326 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:04.327 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.327 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:04.327 18:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:04.327 [2024-10-08 18:39:56.729553] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:04.327 [2024-10-08 18:39:56.730507] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:32:04.327 [2024-10-08 18:39:56.730542] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:04.327 [2024-10-08 18:39:56.802314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.327 [2024-10-08 18:39:56.878823] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:04.327 [2024-10-08 18:39:56.878858] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:04.327 [2024-10-08 18:39:56.878865] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:04.327 [2024-10-08 18:39:56.878872] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:04.327 [2024-10-08 18:39:56.878877] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:04.327 [2024-10-08 18:39:56.879429] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.327 [2024-10-08 18:39:56.944716] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:04.327 [2024-10-08 18:39:56.944930] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
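nvmfappstart here prefixes the NVMF_APP array built earlier (-i 0 -e 0xFFFF --interrupt-mode) with the namespace wrapper and a -m 0x2 core mask, records nvmfpid=640950, and then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. A minimal sketch of that sequence; the polling-loop body is an assumption about waitforlisten's internals (rpc_get_methods is a standard SPDK RPC, but the exact probe and interval may differ):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    for ((i = 100; i > 0; i--)); do        # max_retries=100, per the trace
        # assumed readiness probe: short-timeout RPC against the app socket
        "$rootdir/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods \
            &>/dev/null && break
        sleep 0.5
    done

Once the socket answers, the interrupt-mode notices above confirm both spdk_thread objects (app_thread and nvmf_tgt_poll_group_000) are running in intr mode on the single reactor.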
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:04.327 [2024-10-08 18:39:57.612101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:04.327 [2024-10-08 18:39:57.636293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:04.327 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:04.586 malloc0
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:32:04.586 {
00:32:04.586 "params": {
00:32:04.586 "name": "Nvme$subsystem",
00:32:04.586 "trtype": "$TEST_TRANSPORT",
00:32:04.586 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:04.586 "adrfam": "ipv4",
00:32:04.586 "trsvcid": "$NVMF_PORT",
00:32:04.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:04.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:04.586 "hdgst": ${hdgst:-false},
00:32:04.586 "ddgst": ${ddgst:-false}
00:32:04.586 },
00:32:04.586 "method": "bdev_nvme_attach_controller"
00:32:04.586 }
00:32:04.586 EOF
00:32:04.586 )")
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq .
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=,
00:32:04.586 18:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:32:04.586 "params": {
00:32:04.586 "name": "Nvme1",
00:32:04.586 "trtype": "tcp",
00:32:04.586 "traddr": "10.0.0.2",
00:32:04.586 "adrfam": "ipv4",
00:32:04.586 "trsvcid": "4420",
00:32:04.586 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:32:04.586 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:32:04.586 "hdgst": false,
00:32:04.586 "ddgst": false
00:32:04.586 },
00:32:04.586 "method": "bdev_nvme_attach_controller"
00:32:04.586 }'
00:32:04.586 [2024-10-08 18:39:57.743601] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization...
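Condensed, the target-side setup traced above is six RPCs followed by the first bdevperf run; rpc_cmd is the harness wrapper (effectively scripts/rpc.py driven against /var/tmp/spdk.sock), and the bdevperf path is abbreviated here:

    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy                   # TCP transport with zero-copy enabled
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0                          # 32 MiB ramdisk, 4096-byte blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # expose it as NSID 1
    build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192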
00:32:04.586 [2024-10-08 18:39:57.743655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid641152 ]
00:32:04.586 [2024-10-08 18:39:57.814136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:04.586 [2024-10-08 18:39:57.888273] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:32:04.845 Running I/O for 10 seconds...
00:32:07.156 8429.00 IOPS, 65.85 MiB/s
[2024-10-08T16:40:01.415Z] 8483.00 IOPS, 66.27 MiB/s
[2024-10-08T16:40:02.350Z] 8503.33 IOPS, 66.43 MiB/s
[2024-10-08T16:40:03.284Z] 8530.75 IOPS, 66.65 MiB/s
[2024-10-08T16:40:04.217Z] 8545.00 IOPS, 66.76 MiB/s
[2024-10-08T16:40:05.152Z] 8556.67 IOPS, 66.85 MiB/s
[2024-10-08T16:40:06.086Z] 8557.71 IOPS, 66.86 MiB/s
[2024-10-08T16:40:07.461Z] 8563.25 IOPS, 66.90 MiB/s
[2024-10-08T16:40:08.400Z] 8561.89 IOPS, 66.89 MiB/s
[2024-10-08T16:40:08.400Z] 8561.40 IOPS, 66.89 MiB/s
00:32:15.077 Latency(us)
[2024-10-08T16:40:08.400Z] Device Information : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min     max
00:32:15.077 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:32:15.077 Verification LBA range: start 0x0 length 0x1000
00:32:15.077 Nvme1n1            : 10.01       8565.30  66.92  0.00    0.00  14902.75  807.50  22469.49
00:32:15.077 [2024-10-08T16:40:08.400Z] ===================================================================================================================
[2024-10-08T16:40:08.400Z] Total              :             8565.30  66.92  0.00    0.00  14902.75  807.50  22469.49
00:32:15.077 18:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=642813
00:32:15.078 18:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:32:15.078 18:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:15.078 18:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:32:15.078 18:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:32:15.078 18:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:32:15.078 18:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config
00:32:15.078 18:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:32:15.078 18:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:32:15.078 {
00:32:15.078 "params": {
00:32:15.078 "name": "Nvme$subsystem",
00:32:15.078 "trtype": "$TEST_TRANSPORT",
00:32:15.078 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:15.078 "adrfam": "ipv4",
00:32:15.078 "trsvcid": "$NVMF_PORT",
00:32:15.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:15.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:15.078 "hdgst": ${hdgst:-false},
00:32:15.078 "ddgst": ${ddgst:-false}
00:32:15.078 },
00:32:15.078 "method": "bdev_nvme_attach_controller"
00:32:15.078 }
00:32:15.078 EOF
00:32:15.078 )")
00:32:15.078 18:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat
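The /dev/fd/63 here (and /dev/fd/62 in the first run) is bash process substitution: gen_nvmf_target_json expands one bdev_nvme_attach_controller entry per subsystem argument from the heredoc, validates the document with jq ., and bdevperf reads the result as its config file. The second run is therefore equivalent to the following, with the JSON fed in being the object printed in the trace (Nvme1 attached to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420):

    # zcopy.sh@37 with the process substitution written out explicitly.
    bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192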
[2024-10-08 18:40:08.267774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.078 [2024-10-08 18:40:08.267806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.078 18:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:32:15.078 18:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:32:15.078 18:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:15.078 "params": { 00:32:15.078 "name": "Nvme1", 00:32:15.078 "trtype": "tcp", 00:32:15.078 "traddr": "10.0.0.2", 00:32:15.078 "adrfam": "ipv4", 00:32:15.078 "trsvcid": "4420", 00:32:15.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:15.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:15.078 "hdgst": false, 00:32:15.078 "ddgst": false 00:32:15.078 }, 00:32:15.078 "method": "bdev_nvme_attach_controller" 00:32:15.078 }' 00:32:15.078 [2024-10-08 18:40:08.279737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.078 [2024-10-08 18:40:08.279749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.078 [2024-10-08 18:40:08.291728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.078 [2024-10-08 18:40:08.291739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.078 [2024-10-08 18:40:08.303730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.078 [2024-10-08 18:40:08.303740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.078 [2024-10-08 18:40:08.305550] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
00:32:15.078 [2024-10-08 18:40:08.305594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid642813 ] 00:32:15.078 [2024-10-08 18:40:08.315734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.078 [2024-10-08 18:40:08.315747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.078 [2024-10-08 18:40:08.327727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.078 [2024-10-08 18:40:08.327737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.078 [2024-10-08 18:40:08.339732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.078 [2024-10-08 18:40:08.339744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.078 [2024-10-08 18:40:08.351731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.078 [2024-10-08 18:40:08.351740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.078 [2024-10-08 18:40:08.363732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.078 [2024-10-08 18:40:08.363741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.078 [2024-10-08 18:40:08.372619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.078 [2024-10-08 18:40:08.375732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.078 [2024-10-08 18:40:08.375743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.078 [2024-10-08 18:40:08.387729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.078 [2024-10-08 18:40:08.387741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.399738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.399752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.411737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.411755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.423733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.423750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.435733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.435744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.447703] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.344 [2024-10-08 18:40:08.447730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.447745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.459738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:32:15.344 [2024-10-08 18:40:08.459756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.471742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.471759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.483745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.483765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.495730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.495742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.507732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.507744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.519731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.519742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.531746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.531766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.543740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.543756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.555737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.555753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.567733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.567749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.579736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.579750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.591738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.591757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 Running I/O for 5 seconds... 
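The error pairs repeating from here to the end of the run are expected, not failures: while the 5-second randrw job is in flight, the test keeps re-issuing nvmf_subsystem_add_ns for a namespace ID that is already taken, so each call pauses the subsystem (hence the nvmf_rpc_ns_paused callback in the trace), fails with "Requested NSID 1 already in use", and resumes it, forcing in-flight zero-copy I/O to be queued and replayed. A hedged reconstruction of that probe loop (the exact body lives in test/nvmf/target/zcopy.sh):

    # Hammer the subsystem pause/resume path while bdevperf (perfpid=642813) runs.
    while kill -0 "$perfpid" 2> /dev/null; do
        # Fails by design: NSID 1 already maps to malloc0.
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done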
00:32:15.344 [2024-10-08 18:40:08.603662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.603683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.617908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.617929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.632490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.632510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.647220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.647240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.344 [2024-10-08 18:40:08.662044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.344 [2024-10-08 18:40:08.662062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.676608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.676637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.689483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.689506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.704390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.704408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.715381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.715400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.729933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.729952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.744690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.744709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.760032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.760051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.775311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.775330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.788894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.788914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.799826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 
[2024-10-08 18:40:08.799844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.813935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.813954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.828991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.829010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.843580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.843600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.856663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.856682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.871821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.871839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.882876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.882895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.897727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.897746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.912120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.912138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.603 [2024-10-08 18:40:08.924795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.603 [2024-10-08 18:40:08.924814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:08.939764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:08.939783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:08.953247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:08.953270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:08.967873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:08.967892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:08.978688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:08.978707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:08.993662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:08.993681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:09.008164] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:09.008182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:09.020564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:09.020583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:09.032931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:09.032949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:09.048636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:09.048656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:09.064208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:09.064227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:09.076588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:09.076606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:09.089399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:09.089418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:09.104594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:09.104613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:09.119397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:09.119415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:09.132082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:09.132101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:09.145512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:09.145530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:09.160442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:09.160460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.862 [2024-10-08 18:40:09.175934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.862 [2024-10-08 18:40:09.175953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.120 [2024-10-08 18:40:09.188440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.120 [2024-10-08 18:40:09.188459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.201324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.201342] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.216708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.216731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.231613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.231632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.242657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.242675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.257361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.257385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.272212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.272231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.284687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.284705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.299850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.299870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.310772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.310790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.325543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.325561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.340421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.340446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.356106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.356124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.368560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.368579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.381689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.381707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.396794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.396812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.411271] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.411289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.424933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.424951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.121 [2024-10-08 18:40:09.435929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.121 [2024-10-08 18:40:09.435948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.378 [2024-10-08 18:40:09.449337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.378 [2024-10-08 18:40:09.449355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.378 [2024-10-08 18:40:09.464305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.378 [2024-10-08 18:40:09.464324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.378 [2024-10-08 18:40:09.479683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.378 [2024-10-08 18:40:09.479701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.378 [2024-10-08 18:40:09.493206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.378 [2024-10-08 18:40:09.493225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.378 [2024-10-08 18:40:09.508090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.378 [2024-10-08 18:40:09.508108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.378 [2024-10-08 18:40:09.523959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.378 [2024-10-08 18:40:09.523977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.378 [2024-10-08 18:40:09.537603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.378 [2024-10-08 18:40:09.537622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.378 [2024-10-08 18:40:09.551921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.378 [2024-10-08 18:40:09.551941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.378 [2024-10-08 18:40:09.564740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.378 [2024-10-08 18:40:09.564759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.378 [2024-10-08 18:40:09.579705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.378 [2024-10-08 18:40:09.579724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.378 [2024-10-08 18:40:09.590569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.378 [2024-10-08 18:40:09.590588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.378 16628.00 IOPS, 129.91 MiB/s [2024-10-08T16:40:09.701Z] [2024-10-08 18:40:09.605208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:32:16.378 [2024-10-08 18:40:09.605228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.378 [2024-10-08 18:40:09.619748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.378 [2024-10-08 18:40:09.619768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.379 [2024-10-08 18:40:09.632779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.379 [2024-10-08 18:40:09.632798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.379 [2024-10-08 18:40:09.644063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.379 [2024-10-08 18:40:09.644082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.379 [2024-10-08 18:40:09.657787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.379 [2024-10-08 18:40:09.657806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.379 [2024-10-08 18:40:09.672762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.379 [2024-10-08 18:40:09.672781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.379 [2024-10-08 18:40:09.687781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.379 [2024-10-08 18:40:09.687800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.379 [2024-10-08 18:40:09.700346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.379 [2024-10-08 18:40:09.700365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.713668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.713687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.728636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.728660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.743641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.743661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.758155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.758174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.772580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.772598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.787866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.787886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.801066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.801086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.815840] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.815860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.828479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.828497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.841302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.841321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.856315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.856333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.871323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.871342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.884808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.884827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.897335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.897353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.912324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.912341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.924134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.924151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.937320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.937339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.637 [2024-10-08 18:40:09.952177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.637 [2024-10-08 18:40:09.952195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:09.963659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:09.963678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:09.977369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:09.977393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:09.992206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:09.992229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:10.003910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:10.003929] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:10.017511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:10.017531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:10.032700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:10.032722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:10.048120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:10.048140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:10.063482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:10.063501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:10.078126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:10.078145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:10.092487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:10.092505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:10.107931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:10.107950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:10.119822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:10.119841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:10.134139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:10.134158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:10.149187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:10.149206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:10.164071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:10.164090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:10.177547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:10.177565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:10.192574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:10.192593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.896 [2024-10-08 18:40:10.207518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.896 [2024-10-08 18:40:10.207537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.219528] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.219546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.233830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.233849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.248727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.248746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.263888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.263911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.276278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.276296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.289489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.289508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.304430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.304449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.320140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.320160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.335805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.335824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.348937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.348956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.361483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.361502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.376412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.376430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.392644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.392663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.407807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.407826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.418920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.418938] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.433682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.433701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.448207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.448224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.155 [2024-10-08 18:40:10.462948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.155 [2024-10-08 18:40:10.462965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.477698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.477717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.492508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.492527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.507896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.507915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.521056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.521074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.532159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.532182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.545757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.545776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.560773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.560791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.575729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.575749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.589961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.589990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.604925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.604943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 16621.50 IOPS, 129.86 MiB/s [2024-10-08T16:40:10.737Z] [2024-10-08 18:40:10.619581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.619600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 
18:40:10.632052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.632070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.645757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.645776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.660591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.660609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.672178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.672195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.685500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.685519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.700596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.700615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.716204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.716222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.414 [2024-10-08 18:40:10.731647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.414 [2024-10-08 18:40:10.731667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.744995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.745014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.755536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.755554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.769477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.769496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.784916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.784935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.799874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.799892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.811016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.811035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.825532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.825551] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.840360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.840387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.855617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.855636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.868673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.868693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.881324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.881343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.896496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.896514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.911808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.911827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.923130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.923148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.937512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.937530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.952126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.952144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.968423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.968441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.673 [2024-10-08 18:40:10.984359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.673 [2024-10-08 18:40:10.984383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.000056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.000073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.015528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.015548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.029186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.029205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.044019] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.044039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.056299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.056318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.069652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.069671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.084792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.084813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.100026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.100044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.112445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.112464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.125731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.125751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.140462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.140482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.155092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.155111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.170219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.170238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.184678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.184698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.199608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.199626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.213286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.213305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.228264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.228282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.932 [2024-10-08 18:40:11.244250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.932 [2024-10-08 18:40:11.244269] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.259826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.259846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.272411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.272430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.287308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.287327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.300542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.300560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.313332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.313350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.328826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.328845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.343881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.343900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.355204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.355224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.369312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.369331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.384247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.384266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.399587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.399605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.413221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.413240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.427722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.427741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.440715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.440734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.456145] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.456163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.468226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.468244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.481831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.481850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.496719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.496737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.190 [2024-10-08 18:40:11.511479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.190 [2024-10-08 18:40:11.511499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.448 [2024-10-08 18:40:11.524575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.448 [2024-10-08 18:40:11.524593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.448 [2024-10-08 18:40:11.539780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.448 [2024-10-08 18:40:11.539799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.448 [2024-10-08 18:40:11.551180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.551199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 [2024-10-08 18:40:11.565567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.565586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 [2024-10-08 18:40:11.580172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.580191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 [2024-10-08 18:40:11.596232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.596251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 16621.67 IOPS, 129.86 MiB/s [2024-10-08T16:40:11.772Z] [2024-10-08 18:40:11.608314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.608333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 [2024-10-08 18:40:11.621078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.621096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 [2024-10-08 18:40:11.632073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.632091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 [2024-10-08 18:40:11.645529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:32:18.449 [2024-10-08 18:40:11.645548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 [2024-10-08 18:40:11.660177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.660195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 [2024-10-08 18:40:11.675808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.675827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 [2024-10-08 18:40:11.688620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.688638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 [2024-10-08 18:40:11.699920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.699938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 [2024-10-08 18:40:11.713280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.713300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 [2024-10-08 18:40:11.724297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.724314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 [2024-10-08 18:40:11.736936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.736955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 [2024-10-08 18:40:11.751920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.751938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.449 [2024-10-08 18:40:11.763133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.449 [2024-10-08 18:40:11.763153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.777916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.777935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.792535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.792554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.807579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.807598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.820919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.820939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.835903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.835921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.847206] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.847229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.861390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.861411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.876440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.876459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.891624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.891644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.904867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.904886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.919781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.919800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.930275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.930294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.944890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.944909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.959798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.959818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.971681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.971700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:11.985505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:11.985525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:12.000788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:12.000807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.707 [2024-10-08 18:40:12.016693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.707 [2024-10-08 18:40:12.016712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.031756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.031774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.042754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.042773] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.057756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.057775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.072467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.072485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.087439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.087458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.101438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.101457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.116567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.116590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.132306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.132324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.144556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.144575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.155739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.155756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.169989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.170007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.184966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.184985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.199726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.199752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.211911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.211930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.225907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.225925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.240297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.240315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.251636] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.251654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.265703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.265721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.966 [2024-10-08 18:40:12.280726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.966 [2024-10-08 18:40:12.280744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.295489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.295508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.308626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.308644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.323461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.323479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.336264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.336282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.349181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.349199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.363981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.364001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.374678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.374702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.389234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.389253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.403942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.403960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.414196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.414215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.429158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.429177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.443544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.443563] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.457432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.457450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.472259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.472278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.487711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.487731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.501790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.501809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.516842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.516862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.531548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.531568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.225 [2024-10-08 18:40:12.544623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.225 [2024-10-08 18:40:12.544643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.559631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.559651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.570990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.571009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.585640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.585659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.600455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.600474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 16624.50 IOPS, 129.88 MiB/s [2024-10-08T16:40:12.807Z] [2024-10-08 18:40:12.615823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.615842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.626833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.626852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.642032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.642051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 
18:40:12.656277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.656295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.671673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.671692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.684545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.684563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.696471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.696490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.711374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.711399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.724543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.724562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.739554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.739573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.754241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.754260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.769336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.769355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.784180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.784199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.484 [2024-10-08 18:40:12.796642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.484 [2024-10-08 18:40:12.796661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:12.809305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:12.809324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:12.825008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:12.825026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:12.839887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:12.839906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:12.851299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:12.851317] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:12.865478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:12.865498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:12.880294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:12.880313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:12.896057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:12.896077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:12.908143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:12.908161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:12.921373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:12.921414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:12.936162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:12.936180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:12.951219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:12.951237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:12.965894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:12.965913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:12.980551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:12.980569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:12.996198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:12.996216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:13.011904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:13.011923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:13.025721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:13.025739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:13.040286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:13.040305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.743 [2024-10-08 18:40:13.055540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.743 [2024-10-08 18:40:13.055559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.002 [2024-10-08 18:40:13.069407] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.002 [2024-10-08 18:40:13.069426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.002 [2024-10-08 18:40:13.084165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.002 [2024-10-08 18:40:13.084182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.002 [2024-10-08 18:40:13.096977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.002 [2024-10-08 18:40:13.096995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.002 [2024-10-08 18:40:13.112099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.002 [2024-10-08 18:40:13.112117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.002 [2024-10-08 18:40:13.127366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.002 [2024-10-08 18:40:13.127393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.002 [2024-10-08 18:40:13.141641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.002 [2024-10-08 18:40:13.141660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.002 [2024-10-08 18:40:13.156207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.002 [2024-10-08 18:40:13.156226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.002 [2024-10-08 18:40:13.171331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.002 [2024-10-08 18:40:13.171355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.002 [2024-10-08 18:40:13.185839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.002 [2024-10-08 18:40:13.185857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.002 [2024-10-08 18:40:13.200637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.002 [2024-10-08 18:40:13.200655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.002 [2024-10-08 18:40:13.215530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.002 [2024-10-08 18:40:13.215549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.002 [2024-10-08 18:40:13.228574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.002 [2024-10-08 18:40:13.228592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.002 [2024-10-08 18:40:13.243863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.002 [2024-10-08 18:40:13.243882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.003 [2024-10-08 18:40:13.255525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.003 [2024-10-08 18:40:13.255544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.003 [2024-10-08 18:40:13.270111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.003 [2024-10-08 18:40:13.270130] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.003 [2024-10-08 18:40:13.285303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.003 [2024-10-08 18:40:13.285322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.003 [2024-10-08 18:40:13.300183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.003 [2024-10-08 18:40:13.300201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.003 [2024-10-08 18:40:13.313333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.003 [2024-10-08 18:40:13.313351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.261 [2024-10-08 18:40:13.328084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.261 [2024-10-08 18:40:13.328102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.261 [2024-10-08 18:40:13.341693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.261 [2024-10-08 18:40:13.341712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.261 [2024-10-08 18:40:13.356951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.261 [2024-10-08 18:40:13.356969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.261 [2024-10-08 18:40:13.371724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.261 [2024-10-08 18:40:13.371747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.261 [2024-10-08 18:40:13.383665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.261 [2024-10-08 18:40:13.383687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.261 [2024-10-08 18:40:13.398081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.261 [2024-10-08 18:40:13.398101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.261 [2024-10-08 18:40:13.412880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.261 [2024-10-08 18:40:13.412899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.261 [2024-10-08 18:40:13.427738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.261 [2024-10-08 18:40:13.427757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.261 [2024-10-08 18:40:13.441960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.261 [2024-10-08 18:40:13.441987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.261 [2024-10-08 18:40:13.456804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.261 [2024-10-08 18:40:13.456823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.261 [2024-10-08 18:40:13.471667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.261 [2024-10-08 18:40:13.471687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.261 [2024-10-08 18:40:13.484725] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.261 [2024-10-08 18:40:13.484744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.261 [2024-10-08 18:40:13.500196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.261 [2024-10-08 18:40:13.500215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.262 [2024-10-08 18:40:13.515701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.262 [2024-10-08 18:40:13.515719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.262 [2024-10-08 18:40:13.529323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.262 [2024-10-08 18:40:13.529342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.262 [2024-10-08 18:40:13.544418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.262 [2024-10-08 18:40:13.544436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.262 [2024-10-08 18:40:13.558954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.262 [2024-10-08 18:40:13.558973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.262 [2024-10-08 18:40:13.573543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.262 [2024-10-08 18:40:13.573562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.588167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.588185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.604079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.604098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 16637.20 IOPS, 129.98 MiB/s [2024-10-08T16:40:13.844Z] [2024-10-08 18:40:13.618029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.618047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.521
00:32:20.521                                                                                          Latency(us)
00:32:20.521 [2024-10-08T16:40:13.844Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:20.521 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:32:20.521 Nvme1n1                     :       5.01   16638.56     129.99       0.00     0.00    7685.52    2059.70   12795.12
00:32:20.521 [2024-10-08T16:40:13.844Z] ===================================================================================================================
00:32:20.521 [2024-10-08T16:40:13.844Z] Total                       :              16638.56     129.99       0.00     0.00    7685.52    2059.70   12795.12
00:32:20.521 [2024-10-08 18:40:13.627738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.627754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.639739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.639755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08
18:40:13.651744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.651761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.663740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.663763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.675741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.675754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.687735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.687749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.699732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.699746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.711731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.711744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.723732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.723745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.735730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.735740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.747737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.747748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.759730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.759741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.771728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.771738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.783732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.783742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.795729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.795739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 [2024-10-08 18:40:13.807731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.521 [2024-10-08 18:40:13.807740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (642813) - No such process 00:32:20.521 18:40:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 642813 00:32:20.521 18:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:20.521 18:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.521 18:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:20.521 18:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.521 18:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:20.521 18:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.521 18:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:20.521 delay0 00:32:20.521 18:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.521 18:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:20.521 18:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.521 18:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:20.780 18:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.780 18:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:20.780 [2024-10-08 18:40:13.905306] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:28.897 Initializing NVMe Controllers 00:32:28.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:28.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:28.897 Initialization complete. Launching workers. 
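The trace above (zcopy.sh@52 through @56) swaps NSID 1 over to a delay bdev and then launches SPDK's bundled abort example against it; the NS:/CTRLR: counters just below report how many of the deliberately slowed commands the aborts caught. A minimal by-hand sketch of that same sequence, assuming a running target that already exposes nqn.2016-06.io.spdk:cnode1 with a malloc0 bdev as NSID 1, and with $SPDK_DIR standing in for a hypothetical checkout path (not taken from this run):
SPDK_DIR=/path/to/spdk                       # assumption: local SPDK checkout
rpc=$SPDK_DIR/scripts/rpc.py
# Drop the existing namespace, wrap malloc0 in a delay bdev (all four
# latencies are 1000000 us = 1 s, so I/O stays in flight long enough
# to be aborted), and re-attach the delayed bdev as NSID 1.
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Drive the slowed namespace for 5 s at queue depth 64, 50/50 randrw,
# issuing aborts against queued commands, exactly as zcopy.sh@56 does.
$SPDK_DIR/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'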
00:32:28.897 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 266, failed: 21941 00:32:28.897 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22091, failed to submit 116 00:32:28.897 success 22002, unsuccessful 89, failed 0 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:28.897 rmmod nvme_tcp 00:32:28.897 rmmod nvme_fabrics 00:32:28.897 rmmod nvme_keyring 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 640950 ']' 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 640950 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 640950 ']' 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 640950 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 640950 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 640950' 00:32:28.897 killing process with pid 640950 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 640950 00:32:28.897 18:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 640950 00:32:28.897 18:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:28.897 18:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:28.897 18:40:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:28.897 18:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:32:28.897 18:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:32:28.897 18:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:28.897 18:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:32:28.897 18:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:28.897 18:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:28.897 18:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.897 18:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:28.897 18:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.834 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:29.834 00:32:29.834 real 0m32.579s 00:32:29.834 user 0m41.284s 00:32:29.834 sys 0m12.891s 00:32:29.834 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:29.834 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:29.834 ************************************ 00:32:29.834 END TEST nvmf_zcopy 00:32:29.834 ************************************ 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:30.093 ************************************ 00:32:30.093 START TEST nvmf_nmic 00:32:30.093 ************************************ 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:30.093 * Looking for test storage... 
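Just before the nmic run above starts probing for test storage, the zcopy teardown (nvmftestfini) finished unloading the kernel initiator stack and scrubbing the host's network state. A condensed sketch of those steps, reconstructed from the trace rather than from nvmf/common.sh itself ($nvmfpid stands in for the target PID, 640950 in this run, and the netns deletion command is an assumption based on the cvl_0_0_ns_spdk check in the trace):
sync
modprobe -v -r nvme-tcp                     # drops nvme_tcp (rmmod lines above)
modprobe -v -r nvme-fabrics                 # drops nvme_fabrics and nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid"          # stop the nvmf target process
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only SPDK's rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumption: _remove_spdk_ns
ip -4 addr flush cvl_0_1                    # final address flush seen in the trace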
00:32:30.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:30.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.093 --rc genhtml_branch_coverage=1 00:32:30.093 --rc genhtml_function_coverage=1 00:32:30.093 --rc genhtml_legend=1 00:32:30.093 --rc geninfo_all_blocks=1 00:32:30.093 --rc geninfo_unexecuted_blocks=1 00:32:30.093 00:32:30.093 ' 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:30.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.093 --rc genhtml_branch_coverage=1 00:32:30.093 --rc genhtml_function_coverage=1 00:32:30.093 --rc genhtml_legend=1 00:32:30.093 --rc geninfo_all_blocks=1 00:32:30.093 --rc geninfo_unexecuted_blocks=1 00:32:30.093 00:32:30.093 ' 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:30.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.093 --rc genhtml_branch_coverage=1 00:32:30.093 --rc genhtml_function_coverage=1 00:32:30.093 --rc genhtml_legend=1 00:32:30.093 --rc geninfo_all_blocks=1 00:32:30.093 --rc geninfo_unexecuted_blocks=1 00:32:30.093 00:32:30.093 ' 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:30.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.093 --rc genhtml_branch_coverage=1 00:32:30.093 --rc genhtml_function_coverage=1 00:32:30.093 --rc genhtml_legend=1 00:32:30.093 --rc geninfo_all_blocks=1 00:32:30.093 --rc geninfo_unexecuted_blocks=1 00:32:30.093 00:32:30.093 ' 00:32:30.093 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:30.094 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:30.094 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.094 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.094 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.094 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:30.094 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:30.094 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.094 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.094 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:30.353 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.354 18:40:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:30.354 18:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:36.921 18:40:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:36.921 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:36.922 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.922 18:40:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:36.922 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:36.922 Found net devices under 0000:86:00.0: cvl_0_0 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.922 
18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:36.922 Found net devices under 0000:86:00.1: cvl_0_1 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:36.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:36.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:32:36.922 00:32:36.922 --- 10.0.0.2 ping statistics --- 00:32:36.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.922 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:36.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:36.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:32:36.922 00:32:36.922 --- 10.0.0.1 ping statistics --- 00:32:36.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.922 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=648332 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 648332 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 648332 ']' 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:36.922 18:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.922 [2024-10-08 18:40:29.353557] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:36.922 [2024-10-08 18:40:29.354523] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:32:36.922 [2024-10-08 18:40:29.354559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.922 [2024-10-08 18:40:29.426754] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:36.922 [2024-10-08 18:40:29.506346] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.923 [2024-10-08 18:40:29.506388] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.923 [2024-10-08 18:40:29.506395] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.923 [2024-10-08 18:40:29.506401] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:36.923 [2024-10-08 18:40:29.506406] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:36.923 [2024-10-08 18:40:29.507995] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.923 [2024-10-08 18:40:29.508031] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:36.923 [2024-10-08 18:40:29.508112] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.923 [2024-10-08 18:40:29.508113] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:36.923 [2024-10-08 18:40:29.588344] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:36.923 [2024-10-08 18:40:29.588383] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:36.923 [2024-10-08 18:40:29.589372] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:32:36.923 [2024-10-08 18:40:29.589432] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:36.923 [2024-10-08 18:40:29.589495] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:36.923 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:36.923 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:32:36.923 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:36.923 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:36.923 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.182 [2024-10-08 18:40:30.256974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.182 Malloc0 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.182 [2024-10-08 18:40:30.329161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:37.182 test case1: single bdev can't be used in multiple subsystems 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.182 [2024-10-08 18:40:30.360652] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:37.182 [2024-10-08 18:40:30.360674] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:37.182 [2024-10-08 18:40:30.360681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.182 request: 00:32:37.182 { 00:32:37.182 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:37.182 "namespace": { 00:32:37.182 "bdev_name": "Malloc0", 00:32:37.182 "no_auto_visible": false 00:32:37.182 }, 00:32:37.182 "method": "nvmf_subsystem_add_ns", 00:32:37.182 "req_id": 1 00:32:37.182 } 00:32:37.182 Got JSON-RPC error response 00:32:37.182 response: 00:32:37.182 { 00:32:37.182 "code": -32602, 00:32:37.182 "message": "Invalid parameters" 00:32:37.182 } 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:37.182 18:40:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:37.182 Adding namespace failed - expected result. 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:37.182 test case2: host connect to nvmf target in multiple paths 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.182 [2024-10-08 18:40:30.372750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.182 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:37.441 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:37.700 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:37.700 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:32:37.700 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:37.700 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:37.700 18:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:32:39.603 18:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:39.603 18:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:39.603 18:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:39.603 18:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:39.603 18:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:39.603 18:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:32:39.603 18:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:39.603 [global] 00:32:39.603 thread=1 00:32:39.603 invalidate=1 
00:32:39.603 rw=write 00:32:39.603 time_based=1 00:32:39.603 runtime=1 00:32:39.603 ioengine=libaio 00:32:39.603 direct=1 00:32:39.603 bs=4096 00:32:39.603 iodepth=1 00:32:39.603 norandommap=0 00:32:39.603 numjobs=1 00:32:39.603 00:32:39.603 verify_dump=1 00:32:39.603 verify_backlog=512 00:32:39.603 verify_state_save=0 00:32:39.603 do_verify=1 00:32:39.603 verify=crc32c-intel 00:32:39.603 [job0] 00:32:39.603 filename=/dev/nvme0n1 00:32:39.603 Could not set queue depth (nvme0n1) 00:32:39.861 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:39.861 fio-3.35 00:32:39.861 Starting 1 thread 00:32:41.237 00:32:41.237 job0: (groupid=0, jobs=1): err= 0: pid=649049: Tue Oct 8 18:40:34 2024 00:32:41.237 read: IOPS=2362, BW=9451KiB/s (9677kB/s)(9460KiB/1001msec) 00:32:41.237 slat (nsec): min=7433, max=43330, avg=8505.16, stdev=1743.40 00:32:41.237 clat (usec): min=170, max=273, avg=219.85, stdev=20.05 00:32:41.237 lat (usec): min=196, max=316, avg=228.36, stdev=20.03 00:32:41.237 clat percentiles (usec): 00:32:41.237 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 202], 00:32:41.237 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:32:41.237 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 249], 95.00th=[ 251], 00:32:41.237 | 99.00th=[ 255], 99.50th=[ 258], 99.90th=[ 260], 99.95th=[ 269], 00:32:41.237 | 99.99th=[ 273] 00:32:41.237 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:41.237 slat (usec): min=10, max=25957, avg=22.53, stdev=512.92 00:32:41.237 clat (usec): min=116, max=281, avg=150.16, stdev=33.31 00:32:41.237 lat (usec): min=133, max=26216, avg=172.69, stdev=516.21 00:32:41.237 clat percentiles (usec): 00:32:41.237 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 135], 00:32:41.237 | 30.00th=[ 137], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 139], 00:32:41.237 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 235], 95.00th=[ 241], 00:32:41.237 | 99.00th=[ 253], 99.50th=[ 258], 99.90th=[ 269], 99.95th=[ 269], 00:32:41.237 | 99.99th=[ 281] 00:32:41.237 bw ( KiB/s): min=11336, max=11336, per=100.00%, avg=11336.00, stdev= 0.00, samples=1 00:32:41.237 iops : min= 2834, max= 2834, avg=2834.00, stdev= 0.00, samples=1 00:32:41.237 lat (usec) : 250=96.35%, 500=3.65% 00:32:41.237 cpu : usr=4.00%, sys=8.20%, ctx=4929, majf=0, minf=1 00:32:41.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:41.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.237 issued rwts: total=2365,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:41.237 00:32:41.237 Run status group 0 (all jobs): 00:32:41.237 READ: bw=9451KiB/s (9677kB/s), 9451KiB/s-9451KiB/s (9677kB/s-9677kB/s), io=9460KiB (9687kB), run=1001-1001msec 00:32:41.237 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:32:41.237 00:32:41.237 Disk stats (read/write): 00:32:41.237 nvme0n1: ios=2095/2429, merge=0/0, ticks=563/344, in_queue=907, util=98.70% 00:32:41.237 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:41.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:41.238 rmmod nvme_tcp 00:32:41.238 rmmod nvme_fabrics 00:32:41.238 rmmod nvme_keyring 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 648332 ']' 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 648332 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 648332 ']' 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 648332 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:41.238 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 648332 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 648332' 00:32:41.496 killing process with pid 648332 
00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 648332 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 648332 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.496 18:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.143 18:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:44.143 00:32:44.143 real 0m13.632s 00:32:44.143 user 0m24.189s 00:32:44.143 sys 0m6.024s 00:32:44.143 18:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:44.143 18:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:44.143 ************************************ 00:32:44.143 END TEST nvmf_nmic 00:32:44.143 ************************************ 00:32:44.143 18:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:44.143 18:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:44.143 18:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:44.143 18:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:44.143 ************************************ 00:32:44.143 START TEST nvmf_fio_target 00:32:44.143 ************************************ 00:32:44.143 18:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:44.143 * Looking for test storage... 
00:32:44.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:44.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.143 --rc genhtml_branch_coverage=1 00:32:44.143 --rc genhtml_function_coverage=1 00:32:44.143 --rc genhtml_legend=1 00:32:44.143 --rc geninfo_all_blocks=1 00:32:44.143 --rc geninfo_unexecuted_blocks=1 00:32:44.143 00:32:44.143 ' 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:44.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.143 --rc genhtml_branch_coverage=1 00:32:44.143 --rc genhtml_function_coverage=1 00:32:44.143 --rc genhtml_legend=1 00:32:44.143 --rc geninfo_all_blocks=1 00:32:44.143 --rc geninfo_unexecuted_blocks=1 00:32:44.143 00:32:44.143 ' 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:44.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.143 --rc genhtml_branch_coverage=1 00:32:44.143 --rc genhtml_function_coverage=1 00:32:44.143 --rc genhtml_legend=1 00:32:44.143 --rc geninfo_all_blocks=1 00:32:44.143 --rc geninfo_unexecuted_blocks=1 00:32:44.143 00:32:44.143 ' 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:44.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.143 --rc genhtml_branch_coverage=1 00:32:44.143 --rc genhtml_function_coverage=1 00:32:44.143 --rc genhtml_legend=1 00:32:44.143 --rc geninfo_all_blocks=1 00:32:44.143 --rc geninfo_unexecuted_blocks=1 00:32:44.143 
00:32:44.143 ' 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.143 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:44.144 18:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:50.714 18:40:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:50.714 18:40:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:50.714 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:50.714 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.714 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:50.715 Found net 
devices under 0000:86:00.0: cvl_0_0 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:50.715 Found net devices under 0000:86:00.1: cvl_0_1 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:50.715 18:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:50.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:50.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:32:50.715 00:32:50.715 --- 10.0.0.2 ping statistics --- 00:32:50.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.715 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:50.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:32:50.715 00:32:50.715 --- 10.0.0.1 ping statistics --- 00:32:50.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.715 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=652709 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 652709 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 652709 ']' 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
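The bring-up traced above reduces to roughly the following sketch, assuming the ip/iptables/nvmf_tgt invocations shown in this log; the cvl_0_* interface names, the 10.0.0.x addresses, and the jenkins workspace path are specific to this run and will differ per rig:

  #!/usr/bin/env bash
  set -e
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS=cvl_0_0_ns_spdk
  # The target port (cvl_0_0) moves into a private netns; the initiator port
  # (cvl_0_1) stays in the root namespace, giving a two-host topology on one box.
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # Open the NVMe/TCP port; the SPDK_NVMF comment is what lets teardown later
  # strip these rules via "iptables-save | grep -v SPDK_NVMF | iptables-restore".
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                        # root ns -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                    # target ns -> initiator
  modprobe nvme-tcp
  # nvmf_tgt runs inside the netns in interrupt mode on 4 cores (-m 0xF);
  # the harness then waits for /var/tmp/spdk.sock (waitforlisten) before issuing RPCs.
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &

Keeping the target NIC in its own namespace is what makes a single physical host exercise a real e810 TCP path end to end instead of a loopback shortcut.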
00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.715 [2024-10-08 18:40:43.135844] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:50.715 [2024-10-08 18:40:43.136835] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:32:50.715 [2024-10-08 18:40:43.136876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.715 [2024-10-08 18:40:43.210646] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:50.715 [2024-10-08 18:40:43.290526] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.715 [2024-10-08 18:40:43.290563] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.715 [2024-10-08 18:40:43.290570] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.715 [2024-10-08 18:40:43.290576] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.715 [2024-10-08 18:40:43.290582] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:50.715 [2024-10-08 18:40:43.292160] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.715 [2024-10-08 18:40:43.292192] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:50.715 [2024-10-08 18:40:43.292298] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.715 [2024-10-08 18:40:43.292298] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:50.715 [2024-10-08 18:40:43.383194] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:50.715 [2024-10-08 18:40:43.383363] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:50.715 [2024-10-08 18:40:43.383881] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:50.715 [2024-10-08 18:40:43.384185] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:50.715 [2024-10-08 18:40:43.384248] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
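The stretch of trace that follows is fio.sh configuring storage over RPC and driving I/O from the initiator side; condensed into a sketch (rpc.py reaches the in-netns target over the shared /var/tmp/spdk.sock UNIX socket, and in the real run each bdev name is captured from rpc.py output rather than hard-coded):

  #!/usr/bin/env bash
  set -e
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport
  for i in $(seq 7); do $rpc bdev_malloc_create 64 512; done     # Malloc0..Malloc6 (64 MiB, 512 B blocks)
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do                  # four namespaces on one subsystem
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side (the trace also passes --hostnqn/--hostid from nvme gen-hostnqn):
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME         # waitforserial: expect 4
  "$SPDK/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t write -r 1 -v

The four namespaces surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4, which is why the fio job files in the trace below target exactly those devices with one job each.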
00:32:50.715 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:50.716 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:32:50.716 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:50.716 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:50.716 18:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.716 18:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:50.716 18:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:50.974 [2024-10-08 18:40:44.177098] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:50.974 18:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:51.233 18:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:51.233 18:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:51.491 18:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:51.491 18:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:51.750 18:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:51.750 18:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:51.750 18:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:51.750 18:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:52.008 18:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:52.267 18:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:52.267 18:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:52.525 18:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:52.525 18:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:52.525 18:40:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:32:52.525 18:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:52.783 18:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:53.041 18:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:53.041 18:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:53.301 18:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:53.301 18:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:53.301 18:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:53.560 [2024-10-08 18:40:46.753019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:53.560 18:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:53.818 18:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:54.077 18:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:54.336 18:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:54.336 18:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:32:54.336 18:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:54.336 18:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:32:54.336 18:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:32:54.336 18:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:32:56.239 18:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:56.239 18:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:32:56.239 18:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:56.239 18:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:32:56.239 18:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:56.239 18:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:32:56.239 18:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:56.239 [global] 00:32:56.239 thread=1 00:32:56.239 invalidate=1 00:32:56.239 rw=write 00:32:56.239 time_based=1 00:32:56.239 runtime=1 00:32:56.239 ioengine=libaio 00:32:56.239 direct=1 00:32:56.239 bs=4096 00:32:56.239 iodepth=1 00:32:56.239 norandommap=0 00:32:56.239 numjobs=1 00:32:56.239 00:32:56.239 verify_dump=1 00:32:56.239 verify_backlog=512 00:32:56.239 verify_state_save=0 00:32:56.239 do_verify=1 00:32:56.239 verify=crc32c-intel 00:32:56.239 [job0] 00:32:56.239 filename=/dev/nvme0n1 00:32:56.239 [job1] 00:32:56.239 filename=/dev/nvme0n2 00:32:56.239 [job2] 00:32:56.239 filename=/dev/nvme0n3 00:32:56.239 [job3] 00:32:56.239 filename=/dev/nvme0n4 00:32:56.239 Could not set queue depth (nvme0n1) 00:32:56.239 Could not set queue depth (nvme0n2) 00:32:56.239 Could not set queue depth (nvme0n3) 00:32:56.239 Could not set queue depth (nvme0n4) 00:32:56.497 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:56.497 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:56.497 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:56.497 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:56.497 fio-3.35 00:32:56.497 Starting 4 threads 00:32:57.875 00:32:57.875 job0: (groupid=0, jobs=1): err= 0: pid=654048: Tue Oct 8 18:40:51 2024 00:32:57.875 read: IOPS=1000, BW=4000KiB/s (4096kB/s)(4140KiB/1035msec) 00:32:57.875 slat (nsec): min=7025, max=22818, avg=8255.71, stdev=1204.41 00:32:57.875 clat (usec): min=220, max=41109, avg=693.13, stdev=4179.38 00:32:57.875 lat (usec): min=228, max=41120, avg=701.39, stdev=4179.55 00:32:57.875 clat percentiles (usec): 00:32:57.875 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 245], 00:32:57.875 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 258], 00:32:57.875 | 70.00th=[ 262], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 289], 00:32:57.875 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:57.875 | 99.99th=[41157] 00:32:57.875 write: IOPS=1484, BW=5936KiB/s (6079kB/s)(6144KiB/1035msec); 0 zone resets 00:32:57.875 slat (nsec): min=9044, max=43510, avg=12163.72, stdev=2580.39 00:32:57.875 clat (usec): min=117, max=3160, avg=183.70, stdev=126.71 00:32:57.875 lat (usec): min=128, max=3197, avg=195.87, stdev=127.63 00:32:57.875 clat percentiles (usec): 00:32:57.875 | 1.00th=[ 126], 5.00th=[ 133], 10.00th=[ 149], 20.00th=[ 161], 00:32:57.875 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:32:57.875 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 237], 95.00th=[ 241], 00:32:57.875 | 99.00th=[ 
253], 99.50th=[ 289], 99.90th=[ 3032], 99.95th=[ 3163], 00:32:57.875 | 99.99th=[ 3163] 00:32:57.875 bw ( KiB/s): min= 3864, max= 8424, per=34.20%, avg=6144.00, stdev=3224.41, samples=2 00:32:57.875 iops : min= 966, max= 2106, avg=1536.00, stdev=806.10, samples=2 00:32:57.875 lat (usec) : 250=73.05%, 500=26.41% 00:32:57.875 lat (msec) : 4=0.12%, 50=0.43% 00:32:57.875 cpu : usr=1.84%, sys=3.97%, ctx=2572, majf=0, minf=1 00:32:57.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:57.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.875 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:57.875 job1: (groupid=0, jobs=1): err= 0: pid=654050: Tue Oct 8 18:40:51 2024 00:32:57.875 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:32:57.875 slat (nsec): min=7117, max=24094, avg=8569.18, stdev=2496.37 00:32:57.875 clat (usec): min=171, max=41240, avg=1701.32, stdev=7722.13 00:32:57.875 lat (usec): min=179, max=41250, avg=1709.89, stdev=7723.81 00:32:57.875 clat percentiles (usec): 00:32:57.875 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 182], 00:32:57.875 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:32:57.875 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 196], 95.00th=[ 245], 00:32:57.875 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:57.875 | 99.99th=[41157] 00:32:57.875 write: IOPS=560, BW=2242KiB/s (2296kB/s)(2244KiB/1001msec); 0 zone resets 00:32:57.875 slat (nsec): min=10679, max=69202, avg=13356.33, stdev=4859.90 00:32:57.875 clat (usec): min=128, max=3223, avg=202.13, stdev=132.63 00:32:57.875 lat (usec): min=139, max=3238, avg=215.48, stdev=133.08 00:32:57.875 clat percentiles (usec): 00:32:57.875 | 1.00th=[ 130], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 174], 00:32:57.875 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 196], 00:32:57.875 | 70.00th=[ 208], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 249], 00:32:57.875 | 99.00th=[ 310], 99.50th=[ 334], 99.90th=[ 3228], 99.95th=[ 3228], 00:32:57.875 | 99.99th=[ 3228] 00:32:57.875 bw ( KiB/s): min= 4096, max= 4096, per=22.80%, avg=4096.00, stdev= 0.00, samples=1 00:32:57.875 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:57.875 lat (usec) : 250=95.34%, 500=2.70%, 750=0.09% 00:32:57.875 lat (msec) : 4=0.09%, 50=1.77% 00:32:57.875 cpu : usr=0.60%, sys=2.10%, ctx=1074, majf=0, minf=1 00:32:57.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:57.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.875 issued rwts: total=512,561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:57.875 job2: (groupid=0, jobs=1): err= 0: pid=654052: Tue Oct 8 18:40:51 2024 00:32:57.875 read: IOPS=1574, BW=6299KiB/s (6450kB/s)(6532KiB/1037msec) 00:32:57.875 slat (nsec): min=8063, max=44567, avg=10093.74, stdev=1732.74 00:32:57.875 clat (usec): min=190, max=41531, avg=383.36, stdev=2470.96 00:32:57.875 lat (usec): min=198, max=41543, avg=393.45, stdev=2471.61 00:32:57.875 clat percentiles (usec): 00:32:57.875 | 1.00th=[ 196], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 204], 00:32:57.875 | 30.00th=[ 212], 40.00th=[ 
235], 50.00th=[ 241], 60.00th=[ 243], 00:32:57.875 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 265], 00:32:57.875 | 99.00th=[ 437], 99.50th=[ 474], 99.90th=[41157], 99.95th=[41681], 00:32:57.875 | 99.99th=[41681] 00:32:57.875 write: IOPS=1974, BW=7900KiB/s (8089kB/s)(8192KiB/1037msec); 0 zone resets 00:32:57.875 slat (nsec): min=12582, max=81731, avg=14200.44, stdev=2601.00 00:32:57.875 clat (usec): min=139, max=524, avg=171.64, stdev=25.15 00:32:57.875 lat (usec): min=152, max=538, avg=185.84, stdev=25.76 00:32:57.875 clat percentiles (usec): 00:32:57.875 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 147], 20.00th=[ 151], 00:32:57.875 | 30.00th=[ 155], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:32:57.875 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 206], 00:32:57.875 | 99.00th=[ 260], 99.50th=[ 281], 99.90th=[ 453], 99.95th=[ 482], 00:32:57.875 | 99.99th=[ 523] 00:32:57.875 bw ( KiB/s): min= 7120, max= 9264, per=45.60%, avg=8192.00, stdev=1516.04, samples=2 00:32:57.875 iops : min= 1780, max= 2316, avg=2048.00, stdev=379.01, samples=2 00:32:57.875 lat (usec) : 250=92.23%, 500=7.58%, 750=0.03% 00:32:57.875 lat (msec) : 50=0.16% 00:32:57.875 cpu : usr=3.96%, sys=6.18%, ctx=3681, majf=0, minf=1 00:32:57.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:57.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.875 issued rwts: total=1633,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:57.875 job3: (groupid=0, jobs=1): err= 0: pid=654053: Tue Oct 8 18:40:51 2024 00:32:57.875 read: IOPS=470, BW=1884KiB/s (1929kB/s)(1912KiB/1015msec) 00:32:57.875 slat (nsec): min=7153, max=45936, avg=8580.86, stdev=3014.19 00:32:57.875 clat (usec): min=181, max=41154, avg=1886.20, stdev=8047.19 00:32:57.875 lat (usec): min=189, max=41162, avg=1894.78, stdev=8048.71 00:32:57.875 clat percentiles (usec): 00:32:57.875 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 186], 20.00th=[ 188], 00:32:57.875 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 215], 00:32:57.875 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 277], 00:32:57.875 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:57.875 | 99.99th=[41157] 00:32:57.875 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:32:57.875 slat (nsec): min=9521, max=62416, avg=11935.38, stdev=2845.52 00:32:57.875 clat (usec): min=148, max=1885, avg=194.46, stdev=80.69 00:32:57.875 lat (usec): min=159, max=1897, avg=206.40, stdev=81.00 00:32:57.875 clat percentiles (usec): 00:32:57.875 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:32:57.875 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 186], 00:32:57.875 | 70.00th=[ 194], 80.00th=[ 204], 90.00th=[ 241], 95.00th=[ 253], 00:32:57.875 | 99.00th=[ 285], 99.50th=[ 343], 99.90th=[ 1893], 99.95th=[ 1893], 00:32:57.875 | 99.99th=[ 1893] 00:32:57.875 bw ( KiB/s): min= 4096, max= 4096, per=22.80%, avg=4096.00, stdev= 0.00, samples=1 00:32:57.875 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:57.875 lat (usec) : 250=88.18%, 500=9.70% 00:32:57.875 lat (msec) : 2=0.10%, 50=2.02% 00:32:57.875 cpu : usr=0.39%, sys=1.97%, ctx=990, majf=0, minf=2 00:32:57.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:57.875 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.875 issued rwts: total=478,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:57.875 00:32:57.875 Run status group 0 (all jobs): 00:32:57.875 READ: bw=13.8MiB/s (14.4MB/s), 1884KiB/s-6299KiB/s (1929kB/s-6450kB/s), io=14.3MiB (15.0MB), run=1001-1037msec 00:32:57.876 WRITE: bw=17.5MiB/s (18.4MB/s), 2018KiB/s-7900KiB/s (2066kB/s-8089kB/s), io=18.2MiB (19.1MB), run=1001-1037msec 00:32:57.876 00:32:57.876 Disk stats (read/write): 00:32:57.876 nvme0n1: ios=1060/1536, merge=0/0, ticks=1447/270, in_queue=1717, util=97.39% 00:32:57.876 nvme0n2: ios=40/512, merge=0/0, ticks=1634/98, in_queue=1732, util=97.63% 00:32:57.876 nvme0n3: ios=1627/2048, merge=0/0, ticks=362/302, in_queue=664, util=87.62% 00:32:57.876 nvme0n4: ios=85/512, merge=0/0, ticks=698/95, in_queue=793, util=89.28% 00:32:57.876 18:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:57.876 [global] 00:32:57.876 thread=1 00:32:57.876 invalidate=1 00:32:57.876 rw=randwrite 00:32:57.876 time_based=1 00:32:57.876 runtime=1 00:32:57.876 ioengine=libaio 00:32:57.876 direct=1 00:32:57.876 bs=4096 00:32:57.876 iodepth=1 00:32:57.876 norandommap=0 00:32:57.876 numjobs=1 00:32:57.876 00:32:57.876 verify_dump=1 00:32:57.876 verify_backlog=512 00:32:57.876 verify_state_save=0 00:32:57.876 do_verify=1 00:32:57.876 verify=crc32c-intel 00:32:57.876 [job0] 00:32:57.876 filename=/dev/nvme0n1 00:32:57.876 [job1] 00:32:57.876 filename=/dev/nvme0n2 00:32:57.876 [job2] 00:32:57.876 filename=/dev/nvme0n3 00:32:57.876 [job3] 00:32:57.876 filename=/dev/nvme0n4 00:32:57.876 Could not set queue depth (nvme0n1) 00:32:57.876 Could not set queue depth (nvme0n2) 00:32:57.876 Could not set queue depth (nvme0n3) 00:32:57.876 Could not set queue depth (nvme0n4) 00:32:58.134 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.134 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.134 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.134 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.134 fio-3.35 00:32:58.134 Starting 4 threads 00:32:59.509 00:32:59.509 job0: (groupid=0, jobs=1): err= 0: pid=654426: Tue Oct 8 18:40:52 2024 00:32:59.509 read: IOPS=27, BW=112KiB/s (114kB/s)(112KiB/1002msec) 00:32:59.509 slat (nsec): min=8603, max=23774, avg=19480.25, stdev=5988.41 00:32:59.509 clat (usec): min=285, max=41226, avg=32199.05, stdev=16944.82 00:32:59.509 lat (usec): min=307, max=41247, avg=32218.53, stdev=16944.50 00:32:59.509 clat percentiles (usec): 00:32:59.509 | 1.00th=[ 285], 5.00th=[ 310], 10.00th=[ 322], 20.00th=[ 388], 00:32:59.509 | 30.00th=[40633], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:32:59.509 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:59.509 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:59.509 | 99.99th=[41157] 00:32:59.509 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:32:59.509 slat (nsec): min=9374, max=45941, avg=11217.28, 
stdev=2774.47 00:32:59.509 clat (usec): min=150, max=314, avg=174.70, stdev=15.82 00:32:59.509 lat (usec): min=161, max=325, avg=185.92, stdev=16.60 00:32:59.509 clat percentiles (usec): 00:32:59.509 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:32:59.509 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:32:59.509 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 198], 00:32:59.509 | 99.00th=[ 235], 99.50th=[ 269], 99.90th=[ 314], 99.95th=[ 314], 00:32:59.509 | 99.99th=[ 314] 00:32:59.509 bw ( KiB/s): min= 4096, max= 4096, per=20.34%, avg=4096.00, stdev= 0.00, samples=1 00:32:59.509 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:59.509 lat (usec) : 250=94.26%, 500=1.67% 00:32:59.509 lat (msec) : 50=4.07% 00:32:59.509 cpu : usr=0.00%, sys=0.80%, ctx=542, majf=0, minf=1 00:32:59.509 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.509 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.509 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:59.509 job1: (groupid=0, jobs=1): err= 0: pid=654427: Tue Oct 8 18:40:52 2024 00:32:59.509 read: IOPS=2318, BW=9275KiB/s (9497kB/s)(9284KiB/1001msec) 00:32:59.509 slat (nsec): min=6011, max=29898, avg=7109.62, stdev=1043.20 00:32:59.509 clat (usec): min=183, max=522, avg=245.74, stdev=24.78 00:32:59.509 lat (usec): min=190, max=528, avg=252.85, stdev=24.78 00:32:59.509 clat percentiles (usec): 00:32:59.509 | 1.00th=[ 212], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 233], 00:32:59.509 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:32:59.509 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[ 262], 00:32:59.509 | 99.00th=[ 326], 99.50th=[ 457], 99.90th=[ 502], 99.95th=[ 510], 00:32:59.509 | 99.99th=[ 523] 00:32:59.509 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:59.509 slat (nsec): min=8741, max=39631, avg=9896.62, stdev=1232.26 00:32:59.509 clat (usec): min=115, max=316, avg=147.63, stdev=24.48 00:32:59.509 lat (usec): min=124, max=356, avg=157.52, stdev=24.76 00:32:59.509 clat percentiles (usec): 00:32:59.509 | 1.00th=[ 121], 5.00th=[ 124], 10.00th=[ 126], 20.00th=[ 129], 00:32:59.509 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 145], 00:32:59.509 | 70.00th=[ 149], 80.00th=[ 163], 90.00th=[ 188], 95.00th=[ 198], 00:32:59.509 | 99.00th=[ 223], 99.50th=[ 235], 99.90th=[ 277], 99.95th=[ 314], 00:32:59.509 | 99.99th=[ 318] 00:32:59.509 bw ( KiB/s): min=11280, max=11280, per=56.01%, avg=11280.00, stdev= 0.00, samples=1 00:32:59.509 iops : min= 2820, max= 2820, avg=2820.00, stdev= 0.00, samples=1 00:32:59.509 lat (usec) : 250=84.35%, 500=15.59%, 750=0.06% 00:32:59.509 cpu : usr=2.60%, sys=4.00%, ctx=4881, majf=0, minf=2 00:32:59.509 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.509 issued rwts: total=2321,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.509 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:59.509 job2: (groupid=0, jobs=1): err= 0: pid=654428: Tue Oct 8 18:40:52 2024 00:32:59.509 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 
00:32:59.509 slat (nsec): min=11028, max=40217, avg=19949.36, stdev=7991.30 00:32:59.509 clat (usec): min=40880, max=41205, avg=40990.43, stdev=86.22 00:32:59.509 lat (usec): min=40896, max=41216, avg=41010.38, stdev=84.05 00:32:59.509 clat percentiles (usec): 00:32:59.509 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:59.509 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:59.509 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:59.509 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:59.509 | 99.99th=[41157] 00:32:59.509 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:32:59.509 slat (nsec): min=11504, max=48158, avg=14276.41, stdev=4074.33 00:32:59.509 clat (usec): min=143, max=288, avg=183.89, stdev=13.02 00:32:59.509 lat (usec): min=168, max=328, avg=198.16, stdev=13.80 00:32:59.509 clat percentiles (usec): 00:32:59.509 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176], 00:32:59.509 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:32:59.509 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 204], 00:32:59.509 | 99.00th=[ 221], 99.50th=[ 260], 99.90th=[ 289], 99.95th=[ 289], 00:32:59.509 | 99.99th=[ 289] 00:32:59.509 bw ( KiB/s): min= 4096, max= 4096, per=20.34%, avg=4096.00, stdev= 0.00, samples=1 00:32:59.509 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:59.509 lat (usec) : 250=95.32%, 500=0.56% 00:32:59.509 lat (msec) : 50=4.12% 00:32:59.509 cpu : usr=0.60%, sys=0.79%, ctx=535, majf=0, minf=1 00:32:59.509 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.509 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.509 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:59.509 job3: (groupid=0, jobs=1): err= 0: pid=654429: Tue Oct 8 18:40:52 2024 00:32:59.509 read: IOPS=1023, BW=4094KiB/s (4193kB/s)(4164KiB/1017msec) 00:32:59.509 slat (nsec): min=7302, max=40689, avg=8541.81, stdev=1594.54 00:32:59.509 clat (usec): min=192, max=41362, avg=708.21, stdev=4349.46 00:32:59.510 lat (usec): min=200, max=41371, avg=716.76, stdev=4349.76 00:32:59.510 clat percentiles (usec): 00:32:59.510 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 219], 00:32:59.510 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 239], 60.00th=[ 245], 00:32:59.510 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 269], 00:32:59.510 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:59.510 | 99.99th=[41157] 00:32:59.510 write: IOPS=1510, BW=6041KiB/s (6186kB/s)(6144KiB/1017msec); 0 zone resets 00:32:59.510 slat (nsec): min=9961, max=47793, avg=11396.39, stdev=2022.78 00:32:59.510 clat (usec): min=127, max=318, avg=159.80, stdev=26.43 00:32:59.510 lat (usec): min=137, max=366, avg=171.19, stdev=27.05 00:32:59.510 clat percentiles (usec): 00:32:59.510 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:32:59.510 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 155], 00:32:59.510 | 70.00th=[ 172], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 208], 00:32:59.510 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 310], 99.95th=[ 318], 00:32:59.510 | 99.99th=[ 318] 00:32:59.510 bw ( KiB/s): min= 1136, max=11152, per=30.51%, avg=6144.00, stdev=7082.38, 
samples=2 00:32:59.510 iops : min= 284, max= 2788, avg=1536.00, stdev=1770.60, samples=2 00:32:59.510 lat (usec) : 250=87.97%, 500=11.56% 00:32:59.510 lat (msec) : 50=0.47% 00:32:59.510 cpu : usr=1.97%, sys=4.13%, ctx=2577, majf=0, minf=2 00:32:59.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.510 issued rwts: total=1041,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.510 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:59.510 00:32:59.510 Run status group 0 (all jobs): 00:32:59.510 READ: bw=13.1MiB/s (13.7MB/s), 87.2KiB/s-9275KiB/s (89.3kB/s-9497kB/s), io=13.3MiB (14.0MB), run=1001-1017msec 00:32:59.510 WRITE: bw=19.7MiB/s (20.6MB/s), 2030KiB/s-9.99MiB/s (2078kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1017msec 00:32:59.510 00:32:59.510 Disk stats (read/write): 00:32:59.510 nvme0n1: ios=60/512, merge=0/0, ticks=1656/89, in_queue=1745, util=100.00% 00:32:59.510 nvme0n2: ios=2027/2048, merge=0/0, ticks=492/295, in_queue=787, util=86.88% 00:32:59.510 nvme0n3: ios=57/512, merge=0/0, ticks=1784/74, in_queue=1858, util=95.93% 00:32:59.510 nvme0n4: ios=1037/1536, merge=0/0, ticks=564/217, in_queue=781, util=89.71% 00:32:59.510 18:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:59.510 [global] 00:32:59.510 thread=1 00:32:59.510 invalidate=1 00:32:59.510 rw=write 00:32:59.510 time_based=1 00:32:59.510 runtime=1 00:32:59.510 ioengine=libaio 00:32:59.510 direct=1 00:32:59.510 bs=4096 00:32:59.510 iodepth=128 00:32:59.510 norandommap=0 00:32:59.510 numjobs=1 00:32:59.510 00:32:59.510 verify_dump=1 00:32:59.510 verify_backlog=512 00:32:59.510 verify_state_save=0 00:32:59.510 do_verify=1 00:32:59.510 verify=crc32c-intel 00:32:59.510 [job0] 00:32:59.510 filename=/dev/nvme0n1 00:32:59.510 [job1] 00:32:59.510 filename=/dev/nvme0n2 00:32:59.510 [job2] 00:32:59.510 filename=/dev/nvme0n3 00:32:59.510 [job3] 00:32:59.510 filename=/dev/nvme0n4 00:32:59.510 Could not set queue depth (nvme0n1) 00:32:59.510 Could not set queue depth (nvme0n2) 00:32:59.510 Could not set queue depth (nvme0n3) 00:32:59.510 Could not set queue depth (nvme0n4) 00:32:59.768 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:59.768 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:59.768 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:59.768 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:59.768 fio-3.35 00:32:59.768 Starting 4 threads 00:33:01.144 00:33:01.144 job0: (groupid=0, jobs=1): err= 0: pid=654795: Tue Oct 8 18:40:54 2024 00:33:01.144 read: IOPS=1151, BW=4607KiB/s (4717kB/s)(4828KiB/1048msec) 00:33:01.144 slat (nsec): min=1604, max=34661k, avg=295646.29, stdev=1818232.55 00:33:01.144 clat (msec): min=13, max=119, avg=43.26, stdev=24.22 00:33:01.144 lat (msec): min=13, max=119, avg=43.56, stdev=24.33 00:33:01.144 clat percentiles (msec): 00:33:01.145 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 15], 20.00th=[ 19], 00:33:01.145 | 30.00th=[ 32], 40.00th=[ 36], 50.00th=[ 42], 60.00th=[ 44], 00:33:01.145 | 
70.00th=[ 48], 80.00th=[ 58], 90.00th=[ 79], 95.00th=[ 91], 00:33:01.145 | 99.00th=[ 106], 99.50th=[ 120], 99.90th=[ 120], 99.95th=[ 120], 00:33:01.145 | 99.99th=[ 120] 00:33:01.145 write: IOPS=1465, BW=5863KiB/s (6003kB/s)(6144KiB/1048msec); 0 zone resets 00:33:01.145 slat (usec): min=2, max=25419, avg=414.83, stdev=2308.97 00:33:01.145 clat (msec): min=17, max=135, avg=51.36, stdev=25.32 00:33:01.145 lat (msec): min=17, max=135, avg=51.77, stdev=25.52 00:33:01.145 clat percentiles (msec): 00:33:01.145 | 1.00th=[ 21], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 36], 00:33:01.145 | 30.00th=[ 40], 40.00th=[ 44], 50.00th=[ 50], 60.00th=[ 52], 00:33:01.145 | 70.00th=[ 55], 80.00th=[ 58], 90.00th=[ 81], 95.00th=[ 120], 00:33:01.145 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 136], 00:33:01.145 | 99.99th=[ 136] 00:33:01.145 bw ( KiB/s): min= 4295, max= 7984, per=9.05%, avg=6139.50, stdev=2608.52, samples=2 00:33:01.145 iops : min= 1073, max= 1996, avg=1534.50, stdev=652.66, samples=2 00:33:01.145 lat (msec) : 20=10.83%, 50=49.51%, 100=33.14%, 250=6.53% 00:33:01.145 cpu : usr=1.05%, sys=2.39%, ctx=144, majf=0, minf=1 00:33:01.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:33:01.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:01.145 issued rwts: total=1207,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:01.145 job1: (groupid=0, jobs=1): err= 0: pid=654796: Tue Oct 8 18:40:54 2024 00:33:01.145 read: IOPS=5895, BW=23.0MiB/s (24.1MB/s)(23.1MiB/1001msec) 00:33:01.145 slat (nsec): min=1515, max=4248.9k, avg=81663.99, stdev=437460.37 00:33:01.145 clat (usec): min=528, max=14567, avg=10422.68, stdev=1284.65 00:33:01.145 lat (usec): min=3543, max=14570, avg=10504.35, stdev=1306.34 00:33:01.145 clat percentiles (usec): 00:33:01.145 | 1.00th=[ 7046], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9634], 00:33:01.145 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:33:01.145 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11863], 95.00th=[12518], 00:33:01.145 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13960], 99.95th=[14615], 00:33:01.145 | 99.99th=[14615] 00:33:01.145 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 00:33:01.145 slat (usec): min=2, max=11362, avg=79.25, stdev=423.19 00:33:01.145 clat (usec): min=6769, max=25051, avg=10616.53, stdev=1686.62 00:33:01.145 lat (usec): min=6775, max=25058, avg=10695.79, stdev=1718.89 00:33:01.145 clat percentiles (usec): 00:33:01.145 | 1.00th=[ 7439], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10028], 00:33:01.145 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10290], 60.00th=[10421], 00:33:01.145 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11600], 95.00th=[12780], 00:33:01.145 | 99.00th=[20841], 99.50th=[22676], 99.90th=[24249], 99.95th=[24511], 00:33:01.145 | 99.99th=[25035] 00:33:01.145 bw ( KiB/s): min=24526, max=24526, per=36.14%, avg=24526.00, stdev= 0.00, samples=1 00:33:01.145 iops : min= 6131, max= 6131, avg=6131.00, stdev= 0.00, samples=1 00:33:01.145 lat (usec) : 750=0.01% 00:33:01.145 lat (msec) : 4=0.35%, 10=24.11%, 20=74.91%, 50=0.62% 00:33:01.145 cpu : usr=3.50%, sys=6.80%, ctx=650, majf=0, minf=1 00:33:01.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:01.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:01.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:01.145 issued rwts: total=5901,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:01.145 job2: (groupid=0, jobs=1): err= 0: pid=654798: Tue Oct 8 18:40:54 2024 00:33:01.145 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:33:01.145 slat (nsec): min=1542, max=13410k, avg=117861.42, stdev=883682.41 00:33:01.145 clat (usec): min=5764, max=46007, avg=15641.86, stdev=4301.64 00:33:01.145 lat (usec): min=5769, max=46014, avg=15759.72, stdev=4370.84 00:33:01.145 clat percentiles (usec): 00:33:01.145 | 1.00th=[ 9896], 5.00th=[11338], 10.00th=[12387], 20.00th=[13173], 00:33:01.145 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14615], 60.00th=[14877], 00:33:01.145 | 70.00th=[16319], 80.00th=[17957], 90.00th=[19792], 95.00th=[22938], 00:33:01.145 | 99.00th=[36439], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:33:01.145 | 99.99th=[45876] 00:33:01.145 write: IOPS=4431, BW=17.3MiB/s (18.2MB/s)(17.4MiB/1008msec); 0 zone resets 00:33:01.145 slat (usec): min=2, max=13054, avg=106.02, stdev=844.76 00:33:01.145 clat (usec): min=1563, max=45991, avg=14274.54, stdev=5337.06 00:33:01.145 lat (usec): min=1575, max=49094, avg=14380.56, stdev=5389.38 00:33:01.145 clat percentiles (usec): 00:33:01.145 | 1.00th=[ 2073], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[10945], 00:33:01.145 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12649], 60.00th=[13435], 00:33:01.145 | 70.00th=[15008], 80.00th=[17957], 90.00th=[20841], 95.00th=[24773], 00:33:01.145 | 99.00th=[35914], 99.50th=[37487], 99.90th=[43254], 99.95th=[43254], 00:33:01.145 | 99.99th=[45876] 00:33:01.145 bw ( KiB/s): min=16128, max=18546, per=25.55%, avg=17337.00, stdev=1709.78, samples=2 00:33:01.145 iops : min= 4032, max= 4636, avg=4334.00, stdev=427.09, samples=2 00:33:01.145 lat (msec) : 2=0.44%, 4=0.20%, 10=5.92%, 20=83.31%, 50=10.12% 00:33:01.145 cpu : usr=3.08%, sys=6.65%, ctx=232, majf=0, minf=1 00:33:01.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:33:01.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:01.145 issued rwts: total=4096,4467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:01.145 job3: (groupid=0, jobs=1): err= 0: pid=654799: Tue Oct 8 18:40:54 2024 00:33:01.145 read: IOPS=5460, BW=21.3MiB/s (22.4MB/s)(21.5MiB/1008msec) 00:33:01.145 slat (nsec): min=1384, max=10748k, avg=92388.20, stdev=765442.20 00:33:01.145 clat (usec): min=1192, max=21795, avg=12070.21, stdev=2840.30 00:33:01.145 lat (usec): min=3654, max=21799, avg=12162.60, stdev=2896.42 00:33:01.145 clat percentiles (usec): 00:33:01.145 | 1.00th=[ 6980], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[10290], 00:33:01.145 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:33:01.145 | 70.00th=[11994], 80.00th=[13566], 90.00th=[16712], 95.00th=[18482], 00:33:01.145 | 99.00th=[20579], 99.50th=[21103], 99.90th=[21627], 99.95th=[21890], 00:33:01.145 | 99.99th=[21890] 00:33:01.145 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:33:01.145 slat (usec): min=2, max=9509, avg=79.51, stdev=608.35 00:33:01.145 clat (usec): min=984, max=21408, avg=10853.03, stdev=2586.51 00:33:01.145 lat (usec): min=1720, max=22887, avg=10932.54, stdev=2620.21 00:33:01.145 clat 
percentiles (usec): 00:33:01.145 | 1.00th=[ 4178], 5.00th=[ 6915], 10.00th=[ 7373], 20.00th=[ 9110], 00:33:01.145 | 30.00th=[10159], 40.00th=[10683], 50.00th=[10945], 60.00th=[11338], 00:33:01.145 | 70.00th=[11600], 80.00th=[11863], 90.00th=[14877], 95.00th=[15533], 00:33:01.145 | 99.00th=[17171], 99.50th=[18744], 99.90th=[20841], 99.95th=[21103], 00:33:01.145 | 99.99th=[21365] 00:33:01.145 bw ( KiB/s): min=21536, max=23473, per=33.16%, avg=22504.50, stdev=1369.67, samples=2 00:33:01.145 iops : min= 5384, max= 5868, avg=5626.00, stdev=342.24, samples=2 00:33:01.145 lat (usec) : 1000=0.01% 00:33:01.145 lat (msec) : 2=0.16%, 4=0.38%, 10=20.31%, 20=78.36%, 50=0.78% 00:33:01.145 cpu : usr=4.97%, sys=6.45%, ctx=391, majf=0, minf=1 00:33:01.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:33:01.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:01.145 issued rwts: total=5504,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:01.145 00:33:01.145 Run status group 0 (all jobs): 00:33:01.145 READ: bw=62.3MiB/s (65.3MB/s), 4607KiB/s-23.0MiB/s (4717kB/s-24.1MB/s), io=65.3MiB (68.4MB), run=1001-1048msec 00:33:01.145 WRITE: bw=66.3MiB/s (69.5MB/s), 5863KiB/s-24.0MiB/s (6003kB/s-25.1MB/s), io=69.4MiB (72.8MB), run=1001-1048msec 00:33:01.145 00:33:01.145 Disk stats (read/write): 00:33:01.145 nvme0n1: ios=1057/1226, merge=0/0, ticks=12463/21816, in_queue=34279, util=97.39% 00:33:01.145 nvme0n2: ios=4642/5103, merge=0/0, ticks=16004/18932, in_queue=34936, util=97.94% 00:33:01.145 nvme0n3: ios=3189/3584, merge=0/0, ticks=48902/50541, in_queue=99443, util=87.66% 00:33:01.145 nvme0n4: ios=4292/4608, merge=0/0, ticks=49978/47282, in_queue=97260, util=97.47% 00:33:01.145 18:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:01.145 [global] 00:33:01.145 thread=1 00:33:01.145 invalidate=1 00:33:01.145 rw=randwrite 00:33:01.145 time_based=1 00:33:01.145 runtime=1 00:33:01.145 ioengine=libaio 00:33:01.145 direct=1 00:33:01.145 bs=4096 00:33:01.145 iodepth=128 00:33:01.145 norandommap=0 00:33:01.145 numjobs=1 00:33:01.145 00:33:01.145 verify_dump=1 00:33:01.145 verify_backlog=512 00:33:01.145 verify_state_save=0 00:33:01.145 do_verify=1 00:33:01.145 verify=crc32c-intel 00:33:01.145 [job0] 00:33:01.145 filename=/dev/nvme0n1 00:33:01.145 [job1] 00:33:01.145 filename=/dev/nvme0n2 00:33:01.145 [job2] 00:33:01.145 filename=/dev/nvme0n3 00:33:01.145 [job3] 00:33:01.145 filename=/dev/nvme0n4 00:33:01.145 Could not set queue depth (nvme0n1) 00:33:01.145 Could not set queue depth (nvme0n2) 00:33:01.145 Could not set queue depth (nvme0n3) 00:33:01.145 Could not set queue depth (nvme0n4) 00:33:01.404 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:01.404 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:01.404 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:01.404 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:01.404 fio-3.35 00:33:01.404 Starting 4 threads 00:33:02.780 00:33:02.780 job0: 
(groupid=0, jobs=1): err= 0: pid=655170: Tue Oct 8 18:40:55 2024 00:33:02.780 read: IOPS=6332, BW=24.7MiB/s (25.9MB/s)(24.8MiB/1003msec) 00:33:02.780 slat (nsec): min=1295, max=14048k, avg=65371.39, stdev=576798.65 00:33:02.780 clat (usec): min=457, max=37613, avg=10190.21, stdev=4359.47 00:33:02.780 lat (usec): min=677, max=37620, avg=10255.58, stdev=4392.38 00:33:02.780 clat percentiles (usec): 00:33:02.780 | 1.00th=[ 1418], 5.00th=[ 4948], 10.00th=[ 6128], 20.00th=[ 7701], 00:33:02.781 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9896], 00:33:02.781 | 70.00th=[10814], 80.00th=[12780], 90.00th=[15008], 95.00th=[16450], 00:33:02.781 | 99.00th=[30016], 99.50th=[35390], 99.90th=[37487], 99.95th=[37487], 00:33:02.781 | 99.99th=[37487] 00:33:02.781 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:33:02.781 slat (nsec): min=1968, max=10966k, avg=59816.81, stdev=492219.77 00:33:02.781 clat (usec): min=366, max=50281, avg=9369.62, stdev=4349.92 00:33:02.781 lat (usec): min=376, max=50285, avg=9429.43, stdev=4373.69 00:33:02.781 clat percentiles (usec): 00:33:02.781 | 1.00th=[ 2573], 5.00th=[ 4015], 10.00th=[ 5735], 20.00th=[ 6783], 00:33:02.781 | 30.00th=[ 7701], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9503], 00:33:02.781 | 70.00th=[10159], 80.00th=[10683], 90.00th=[13435], 95.00th=[15664], 00:33:02.781 | 99.00th=[23725], 99.50th=[37487], 99.90th=[48497], 99.95th=[50070], 00:33:02.781 | 99.99th=[50070] 00:33:02.781 bw ( KiB/s): min=24576, max=28672, per=38.54%, avg=26624.00, stdev=2896.31, samples=2 00:33:02.781 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:33:02.781 lat (usec) : 500=0.06%, 750=0.10%, 1000=0.15% 00:33:02.781 lat (msec) : 2=0.68%, 4=3.41%, 10=59.81%, 20=33.94%, 50=1.80% 00:33:02.781 lat (msec) : 100=0.05% 00:33:02.781 cpu : usr=4.49%, sys=8.78%, ctx=411, majf=0, minf=1 00:33:02.781 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:33:02.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:02.781 issued rwts: total=6351,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:02.781 job1: (groupid=0, jobs=1): err= 0: pid=655171: Tue Oct 8 18:40:55 2024 00:33:02.781 read: IOPS=4979, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1002msec) 00:33:02.781 slat (nsec): min=1338, max=13360k, avg=91010.40, stdev=628395.22 00:33:02.781 clat (usec): min=1394, max=34403, avg=11922.06, stdev=4979.25 00:33:02.781 lat (usec): min=1397, max=35321, avg=12013.07, stdev=5018.42 00:33:02.781 clat percentiles (usec): 00:33:02.781 | 1.00th=[ 5145], 5.00th=[ 7439], 10.00th=[ 8160], 20.00th=[ 8979], 00:33:02.781 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10290], 60.00th=[10683], 00:33:02.781 | 70.00th=[11338], 80.00th=[12649], 90.00th=[20579], 95.00th=[24249], 00:33:02.781 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31065], 99.95th=[33817], 00:33:02.781 | 99.99th=[34341] 00:33:02.781 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:33:02.781 slat (usec): min=2, max=43219, avg=100.33, stdev=977.25 00:33:02.781 clat (usec): min=4380, max=53671, avg=13188.86, stdev=9199.13 00:33:02.781 lat (usec): min=4387, max=53675, avg=13289.19, stdev=9238.49 00:33:02.781 clat percentiles (usec): 00:33:02.781 | 1.00th=[ 6259], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9503], 00:33:02.781 | 30.00th=[ 9634], 40.00th=[ 9765], 
50.00th=[10028], 60.00th=[10290], 00:33:02.781 | 70.00th=[10552], 80.00th=[12256], 90.00th=[22152], 95.00th=[39584], 00:33:02.781 | 99.00th=[53740], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:33:02.781 | 99.99th=[53740] 00:33:02.781 bw ( KiB/s): min=20480, max=20480, per=29.64%, avg=20480.00, stdev= 0.00, samples=2 00:33:02.781 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:33:02.781 lat (msec) : 2=0.08%, 10=44.06%, 20=44.87%, 50=9.89%, 100=1.10% 00:33:02.781 cpu : usr=4.00%, sys=6.69%, ctx=414, majf=0, minf=1 00:33:02.781 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:33:02.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:02.781 issued rwts: total=4989,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:02.781 job2: (groupid=0, jobs=1): err= 0: pid=655172: Tue Oct 8 18:40:55 2024 00:33:02.781 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.5MiB/1050msec) 00:33:02.781 slat (nsec): min=1296, max=15401k, avg=149948.16, stdev=1034271.72 00:33:02.781 clat (msec): min=4, max=101, avg=18.86, stdev=13.06 00:33:02.781 lat (msec): min=4, max=101, avg=19.01, stdev=13.14 00:33:02.781 clat percentiles (msec): 00:33:02.781 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 13], 00:33:02.781 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 18], 00:33:02.781 | 70.00th=[ 19], 80.00th=[ 21], 90.00th=[ 26], 95.00th=[ 43], 00:33:02.781 | 99.00th=[ 87], 99.50th=[ 95], 99.90th=[ 103], 99.95th=[ 103], 00:33:02.781 | 99.99th=[ 103] 00:33:02.781 write: IOPS=3413, BW=13.3MiB/s (14.0MB/s)(14.0MiB/1050msec); 0 zone resets 00:33:02.781 slat (usec): min=2, max=15815, avg=140.04, stdev=868.59 00:33:02.781 clat (usec): min=1404, max=105731, avg=20367.34, stdev=15607.48 00:33:02.781 lat (usec): min=1421, max=105742, avg=20507.38, stdev=15712.32 00:33:02.781 clat percentiles (msec): 00:33:02.781 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:33:02.781 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 16], 00:33:02.781 | 70.00th=[ 19], 80.00th=[ 25], 90.00th=[ 51], 95.00th=[ 56], 00:33:02.781 | 99.00th=[ 62], 99.50th=[ 62], 99.90th=[ 106], 99.95th=[ 106], 00:33:02.781 | 99.99th=[ 106] 00:33:02.781 bw ( KiB/s): min=14256, max=14336, per=20.69%, avg=14296.00, stdev=56.57, samples=2 00:33:02.781 iops : min= 3564, max= 3584, avg=3574.00, stdev=14.14, samples=2 00:33:02.781 lat (msec) : 2=0.03%, 10=13.79%, 20=62.96%, 50=16.06%, 100=6.95% 00:33:02.781 lat (msec) : 250=0.21% 00:33:02.781 cpu : usr=2.57%, sys=4.39%, ctx=297, majf=0, minf=1 00:33:02.781 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:33:02.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:02.781 issued rwts: total=3190,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:02.781 job3: (groupid=0, jobs=1): err= 0: pid=655173: Tue Oct 8 18:40:55 2024 00:33:02.781 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:33:02.781 slat (nsec): min=1348, max=39167k, avg=204859.88, stdev=1757931.26 00:33:02.781 clat (usec): min=11010, max=75174, avg=25047.31, stdev=15254.31 00:33:02.781 lat (usec): min=11016, max=75207, avg=25252.17, stdev=15418.47 00:33:02.781 clat percentiles (usec): 
00:33:02.781 | 1.00th=[11469], 5.00th=[11731], 10.00th=[12125], 20.00th=[13960], 00:33:02.781 | 30.00th=[14877], 40.00th=[15664], 50.00th=[16319], 60.00th=[20055], 00:33:02.781 | 70.00th=[32637], 80.00th=[39060], 90.00th=[48497], 95.00th=[57410], 00:33:02.781 | 99.00th=[71828], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:33:02.781 | 99.99th=[74974] 00:33:02.781 write: IOPS=2753, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1008msec); 0 zone resets 00:33:02.781 slat (usec): min=2, max=29269, avg=164.92, stdev=1337.50 00:33:02.781 clat (usec): min=5312, max=75509, avg=22293.14, stdev=14033.23 00:33:02.781 lat (usec): min=6395, max=75519, avg=22458.06, stdev=14137.55 00:33:02.781 clat percentiles (usec): 00:33:02.781 | 1.00th=[ 6456], 5.00th=[11338], 10.00th=[12518], 20.00th=[13829], 00:33:02.781 | 30.00th=[14222], 40.00th=[14484], 50.00th=[16188], 60.00th=[19530], 00:33:02.781 | 70.00th=[21365], 80.00th=[31589], 90.00th=[36439], 95.00th=[57410], 00:33:02.781 | 99.00th=[74974], 99.50th=[74974], 99.90th=[74974], 99.95th=[76022], 00:33:02.781 | 99.99th=[76022] 00:33:02.781 bw ( KiB/s): min= 8240, max=12944, per=15.33%, avg=10592.00, stdev=3326.23, samples=2 00:33:02.781 iops : min= 2060, max= 3236, avg=2648.00, stdev=831.56, samples=2 00:33:02.781 lat (msec) : 10=1.78%, 20=58.68%, 50=31.15%, 100=8.40% 00:33:02.781 cpu : usr=2.58%, sys=3.97%, ctx=136, majf=0, minf=1 00:33:02.781 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:33:02.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:02.781 issued rwts: total=2560,2776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:02.781 00:33:02.781 Run status group 0 (all jobs): 00:33:02.781 READ: bw=63.6MiB/s (66.7MB/s), 9.92MiB/s-24.7MiB/s (10.4MB/s-25.9MB/s), io=66.8MiB (70.0MB), run=1002-1050msec 00:33:02.781 WRITE: bw=67.5MiB/s (70.7MB/s), 10.8MiB/s-25.9MiB/s (11.3MB/s-27.2MB/s), io=70.8MiB (74.3MB), run=1002-1050msec 00:33:02.781 00:33:02.781 Disk stats (read/write): 00:33:02.781 nvme0n1: ios=5461/5632, merge=0/0, ticks=55156/47481, in_queue=102637, util=97.90% 00:33:02.781 nvme0n2: ios=4131/4175, merge=0/0, ticks=25633/24123, in_queue=49756, util=96.45% 00:33:02.781 nvme0n3: ios=3050/3079, merge=0/0, ticks=49659/55597, in_queue=105256, util=88.97% 00:33:02.781 nvme0n4: ios=2008/2048, merge=0/0, ticks=27658/23804, in_queue=51462, util=89.30% 00:33:02.781 18:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:02.781 18:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=655403 00:33:02.781 18:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:02.781 18:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:02.781 [global] 00:33:02.781 thread=1 00:33:02.781 invalidate=1 00:33:02.781 rw=read 00:33:02.781 time_based=1 00:33:02.781 runtime=10 00:33:02.781 ioengine=libaio 00:33:02.781 direct=1 00:33:02.781 bs=4096 00:33:02.781 iodepth=1 00:33:02.781 norandommap=1 00:33:02.781 numjobs=1 00:33:02.781 00:33:02.781 [job0] 00:33:02.781 filename=/dev/nvme0n1 00:33:02.781 [job1] 00:33:02.781 filename=/dev/nvme0n2 00:33:02.781 [job2] 00:33:02.781 filename=/dev/nvme0n3 00:33:02.781 
[job3] 00:33:02.781 filename=/dev/nvme0n4 00:33:02.781 Could not set queue depth (nvme0n1) 00:33:02.781 Could not set queue depth (nvme0n2) 00:33:02.781 Could not set queue depth (nvme0n3) 00:33:02.781 Could not set queue depth (nvme0n4) 00:33:03.040 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:03.040 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:03.040 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:03.040 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:03.040 fio-3.35 00:33:03.040 Starting 4 threads 00:33:06.326 18:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:06.326 18:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:06.326 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=282624, buflen=4096 00:33:06.326 fio: pid=655587, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:06.326 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=323584, buflen=4096 00:33:06.326 fio: pid=655582, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:06.326 18:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:06.326 18:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:06.326 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=23527424, buflen=4096 00:33:06.326 fio: pid=655561, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:06.326 18:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:06.326 18:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:06.585 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1433600, buflen=4096 00:33:06.585 fio: pid=655567, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:06.585 18:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:06.585 18:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:06.585 00:33:06.585 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=655561: Tue Oct 8 18:40:59 2024 00:33:06.585 read: IOPS=1837, BW=7350KiB/s (7526kB/s)(22.4MiB/3126msec) 00:33:06.585 slat (usec): min=6, max=9677, avg=10.33, stdev=127.58 00:33:06.585 clat (usec): min=166, max=41912, avg=528.11, stdev=3637.76 00:33:06.585 lat (usec): min=180, max=50850, avg=538.43, stdev=3659.88 
00:33:06.585 clat percentiles (usec): 00:33:06.585 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 190], 00:33:06.585 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:33:06.585 | 70.00th=[ 206], 80.00th=[ 208], 90.00th=[ 215], 95.00th=[ 223], 00:33:06.585 | 99.00th=[ 392], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:06.585 | 99.99th=[41681] 00:33:06.585 bw ( KiB/s): min= 96, max=18920, per=100.00%, avg=7655.17, stdev=8561.30, samples=6 00:33:06.585 iops : min= 24, max= 4730, avg=1913.67, stdev=2140.46, samples=6 00:33:06.585 lat (usec) : 250=98.14%, 500=1.01%, 750=0.02% 00:33:06.585 lat (msec) : 2=0.02%, 50=0.80% 00:33:06.585 cpu : usr=0.99%, sys=3.07%, ctx=5748, majf=0, minf=1 00:33:06.585 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:06.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.585 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.585 issued rwts: total=5745,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:06.585 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:06.585 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=655567: Tue Oct 8 18:40:59 2024 00:33:06.585 read: IOPS=105, BW=421KiB/s (431kB/s)(1400KiB/3328msec) 00:33:06.585 slat (usec): min=6, max=2910, avg=19.66, stdev=154.94 00:33:06.585 clat (usec): min=169, max=45107, avg=9415.61, stdev=17094.40 00:33:06.585 lat (usec): min=176, max=45126, avg=9435.26, stdev=17116.84 00:33:06.585 clat percentiles (usec): 00:33:06.585 | 1.00th=[ 172], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 178], 00:33:06.585 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 206], 60.00th=[ 221], 00:33:06.585 | 70.00th=[ 245], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:33:06.585 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45351], 99.95th=[45351], 00:33:06.585 | 99.99th=[45351] 00:33:06.585 bw ( KiB/s): min= 96, max= 2232, per=6.08%, avg=456.33, stdev=869.91, samples=6 00:33:06.585 iops : min= 24, max= 558, avg=114.00, stdev=217.52, samples=6 00:33:06.585 lat (usec) : 250=72.08%, 500=5.13% 00:33:06.585 lat (msec) : 50=22.51% 00:33:06.585 cpu : usr=0.00%, sys=0.30%, ctx=356, majf=0, minf=2 00:33:06.585 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:06.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.585 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.585 issued rwts: total=351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:06.585 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:06.586 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=655582: Tue Oct 8 18:40:59 2024 00:33:06.586 read: IOPS=27, BW=109KiB/s (112kB/s)(316KiB/2901msec) 00:33:06.586 slat (usec): min=9, max=4735, avg=81.06, stdev=527.04 00:33:06.586 clat (usec): min=217, max=42071, avg=36365.95, stdev=12978.66 00:33:06.586 lat (usec): min=241, max=45868, avg=36447.77, stdev=13010.75 00:33:06.586 clat percentiles (usec): 00:33:06.586 | 1.00th=[ 219], 5.00th=[ 243], 10.00th=[ 367], 20.00th=[40633], 00:33:06.586 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:06.586 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:06.586 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:06.586 | 99.99th=[42206] 00:33:06.586 bw ( KiB/s): min= 96, max= 120, 
per=1.44%, avg=108.80, stdev=12.13, samples=5 00:33:06.586 iops : min= 24, max= 30, avg=27.20, stdev= 3.03, samples=5 00:33:06.586 lat (usec) : 250=6.25%, 500=3.75% 00:33:06.586 lat (msec) : 2=1.25%, 50=87.50% 00:33:06.586 cpu : usr=0.14%, sys=0.00%, ctx=81, majf=0, minf=2 00:33:06.586 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:06.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.586 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.586 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:06.586 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:06.586 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=655587: Tue Oct 8 18:40:59 2024 00:33:06.586 read: IOPS=25, BW=101KiB/s (103kB/s)(276KiB/2733msec) 00:33:06.586 slat (nsec): min=7673, max=33754, avg=23085.14, stdev=3310.28 00:33:06.586 clat (usec): min=258, max=41962, avg=39235.63, stdev=8357.60 00:33:06.586 lat (usec): min=287, max=41986, avg=39258.69, stdev=8357.77 00:33:06.586 clat percentiles (usec): 00:33:06.586 | 1.00th=[ 260], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:06.586 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:06.586 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:06.586 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:06.586 | 99.99th=[42206] 00:33:06.586 bw ( KiB/s): min= 96, max= 112, per=1.33%, avg=100.80, stdev= 7.16, samples=5 00:33:06.586 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:33:06.586 lat (usec) : 500=4.29% 00:33:06.586 lat (msec) : 50=94.29% 00:33:06.586 cpu : usr=0.07%, sys=0.00%, ctx=72, majf=0, minf=2 00:33:06.586 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:06.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.586 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.586 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:06.586 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:06.586 00:33:06.586 Run status group 0 (all jobs): 00:33:06.586 READ: bw=7502KiB/s (7682kB/s), 101KiB/s-7350KiB/s (103kB/s-7526kB/s), io=24.4MiB (25.6MB), run=2733-3328msec 00:33:06.586 00:33:06.586 Disk stats (read/write): 00:33:06.586 nvme0n1: ios=5744/0, merge=0/0, ticks=2917/0, in_queue=2917, util=95.44% 00:33:06.586 nvme0n2: ios=381/0, merge=0/0, ticks=4036/0, in_queue=4036, util=100.00% 00:33:06.586 nvme0n3: ios=78/0, merge=0/0, ticks=2833/0, in_queue=2833, util=96.42% 00:33:06.586 nvme0n4: ios=111/0, merge=0/0, ticks=3539/0, in_queue=3539, util=98.85% 00:33:06.844 18:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:06.844 18:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:06.844 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:06.844 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:07.102 18:41:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:07.102 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:07.360 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:07.361 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:07.618 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:07.618 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 655403 00:33:07.618 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:07.618 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:07.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:07.618 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:07.618 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:33:07.618 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:07.618 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:07.618 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:07.618 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:07.618 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:33:07.618 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:07.618 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:07.618 nvmf hotplug test: fio failed as expected 00:33:07.618 18:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:07.876 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:33:07.876 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:07.876 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:33:07.876 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:07.876 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:07.876 18:41:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:07.876 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:07.876 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.877 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:07.877 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.877 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.877 rmmod nvme_tcp 00:33:07.877 rmmod nvme_fabrics 00:33:07.877 rmmod nvme_keyring 00:33:07.877 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 652709 ']' 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 652709 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 652709 ']' 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 652709 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 652709 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 652709' 00:33:08.135 killing process with pid 652709 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 652709 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 652709 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 
00:33:08.135 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:08.393 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:08.393 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:08.393 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.393 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.393 18:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.296 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:10.296 00:33:10.296 real 0m26.585s 00:33:10.296 user 1m31.757s 00:33:10.296 sys 0m11.126s 00:33:10.296 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:10.296 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:10.296 ************************************ 00:33:10.296 END TEST nvmf_fio_target 00:33:10.296 ************************************ 00:33:10.296 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:10.296 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:10.296 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:10.296 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:10.296 ************************************ 00:33:10.296 START TEST nvmf_bdevio 00:33:10.296 ************************************ 00:33:10.296 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:10.555 * Looking for test storage... 
00:33:10.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:10.555 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:10.555 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:33:10.555 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:10.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.556 --rc genhtml_branch_coverage=1 00:33:10.556 --rc genhtml_function_coverage=1 00:33:10.556 --rc genhtml_legend=1 00:33:10.556 --rc geninfo_all_blocks=1 00:33:10.556 --rc geninfo_unexecuted_blocks=1 00:33:10.556 00:33:10.556 ' 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:10.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.556 --rc genhtml_branch_coverage=1 00:33:10.556 --rc genhtml_function_coverage=1 00:33:10.556 --rc genhtml_legend=1 00:33:10.556 --rc geninfo_all_blocks=1 00:33:10.556 --rc geninfo_unexecuted_blocks=1 00:33:10.556 00:33:10.556 ' 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:10.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.556 --rc genhtml_branch_coverage=1 00:33:10.556 --rc genhtml_function_coverage=1 00:33:10.556 --rc genhtml_legend=1 00:33:10.556 --rc geninfo_all_blocks=1 00:33:10.556 --rc geninfo_unexecuted_blocks=1 00:33:10.556 00:33:10.556 ' 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:10.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.556 --rc genhtml_branch_coverage=1 00:33:10.556 --rc genhtml_function_coverage=1 00:33:10.556 --rc genhtml_legend=1 00:33:10.556 --rc geninfo_all_blocks=1 00:33:10.556 --rc geninfo_unexecuted_blocks=1 00:33:10.556 00:33:10.556 ' 00:33:10.556 18:41:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:10.556 18:41:03 
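The heavily repeated /opt/golangci, /opt/go and /opt/protoc segments in the PATH lines above are not corruption: paths/export.sh is sourced once per script that pulls in the common helpers, and each pass (its lines @2–@4) blindly prepends the same three toolchain directories before exporting the result, so PATH grows by three entries per source. Sketched below, together with an idempotent variant; path_prepend is a name invented for this note, not an SPDK helper:

    # What paths/export.sh effectively does on every source:
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH

    # A dedup-guarded prepend would keep PATH from accumulating duplicates:
    path_prepend() {
      case ":$PATH:" in
        *":$1:"*) ;;            # already present, leave PATH alone
        *) PATH=$1:$PATH ;;
      esac
    }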
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:33:10.556 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:10.557 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:10.557 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:10.557 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:10.557 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:10.557 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.557 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.557 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.557 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:10.557 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:10.557 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:33:10.557 18:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:17.126 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:17.126 18:41:09 
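gather_supported_nvmf_pci_devs selects test NICs purely by PCI vendor:device ID: 0x8086:0x1592/0x159b are Intel E810 parts, 0x8086:0x37d2 is X722, and the 0x15b3 list covers Mellanox ConnectX variants. The two 0000:86:00.x functions found above report 0x159b, so they populate the e810 and pci_devs arrays. The script walks a pre-built pci_bus_cache rather than shelling out, but an equivalent by-hand query would look roughly like this (lspci numeric output assumed):

    # Same vendor:device match done directly against lspci -Dn output,
    # using the IDs registered in the arrays above:
    lspci -Dn | grep -E \
      '8086:(1592|159b|37d2)|15b3:(1013|1015|1017|1019|101b|101d|1021|a2d6|a2dc)'
    # On this machine it would report the two E810 functions:
    #   0000:86:00.0 ... 8086:159b
    #   0000:86:00.1 ... 8086:159b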
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:17.126 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:17.126 Found net devices under 0000:86:00.0: cvl_0_0 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:17.126 Found net devices under 0000:86:00.1: cvl_0_1 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:17.126 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:17.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:17.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:33:17.127 00:33:17.127 --- 10.0.0.2 ping statistics --- 00:33:17.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.127 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:17.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:17.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:33:17.127 00:33:17.127 --- 10.0.0.1 ping statistics --- 00:33:17.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.127 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:17.127 18:41:09 
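The nvmf_tcp_init block above turns one dual-port E810 into a real two-endpoint link: port cvl_0_0 moves into a fresh namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens the NVMe/TCP port, and one ping in each direction proves connectivity before any NVMe traffic flows. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Rule is tagged with a comment so teardown can strip exactly this entry:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target (0.289 ms)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator (0.220 ms)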
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=659951 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 659951 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 659951 ']' 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:17.127 18:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:17.127 [2024-10-08 18:41:09.807243] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:17.127 [2024-10-08 18:41:09.808183] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:33:17.127 [2024-10-08 18:41:09.808218] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:17.127 [2024-10-08 18:41:09.880651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:17.127 [2024-10-08 18:41:09.952875] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:17.127 [2024-10-08 18:41:09.952920] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:17.127 [2024-10-08 18:41:09.952927] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:17.127 [2024-10-08 18:41:09.952933] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:17.127 [2024-10-08 18:41:09.952938] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:17.127 [2024-10-08 18:41:09.954433] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:33:17.127 [2024-10-08 18:41:09.954617] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:33:17.127 [2024-10-08 18:41:09.954726] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:33:17.127 [2024-10-08 18:41:09.954726] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:33:17.127 [2024-10-08 18:41:10.033684] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
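The target is launched inside the namespace as ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78, and the notices above are the direct consequence: 0x78 is binary 0111 1000, so four reactors land on cores 3–6, and every poll-group thread starts in interrupt mode. A quick mask decoder, written for this note rather than taken from the scripts:

    # Decode an SPDK -m core mask into core numbers:
    mask=0x78
    for ((core = 0; core < 64; core++)); do
      (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done
    # -> cores 3, 4, 5, 6 — matching the four "Reactor started on core N" lines.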
00:33:17.127 [2024-10-08 18:41:10.034477] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:17.127 [2024-10-08 18:41:10.034577] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:17.127 [2024-10-08 18:41:10.034622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:17.127 [2024-10-08 18:41:10.034723] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:17.387 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:17.387 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:33:17.387 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:17.387 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:17.387 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:17.387 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:17.387 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:17.387 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.387 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:17.387 [2024-10-08 18:41:10.683508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.387 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.387 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:17.387 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.387 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:17.646 Malloc0 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.646 18:41:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:17.646 [2024-10-08 18:41:10.751579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:17.646 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:17.646 { 00:33:17.646 "params": { 00:33:17.646 "name": "Nvme$subsystem", 00:33:17.646 "trtype": "$TEST_TRANSPORT", 00:33:17.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:17.646 "adrfam": "ipv4", 00:33:17.646 "trsvcid": "$NVMF_PORT", 00:33:17.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:17.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:17.647 "hdgst": ${hdgst:-false}, 00:33:17.647 "ddgst": ${ddgst:-false} 00:33:17.647 }, 00:33:17.647 "method": "bdev_nvme_attach_controller" 00:33:17.647 } 00:33:17.647 EOF 00:33:17.647 )") 00:33:17.647 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:33:17.647 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:33:17.647 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:33:17.647 18:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:17.647 "params": { 00:33:17.647 "name": "Nvme1", 00:33:17.647 "trtype": "tcp", 00:33:17.647 "traddr": "10.0.0.2", 00:33:17.647 "adrfam": "ipv4", 00:33:17.647 "trsvcid": "4420", 00:33:17.647 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:17.647 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:17.647 "hdgst": false, 00:33:17.647 "ddgst": false 00:33:17.647 }, 00:33:17.647 "method": "bdev_nvme_attach_controller" 00:33:17.647 }' 00:33:17.647 [2024-10-08 18:41:10.802294] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
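Between them, the rpc_cmd calls above stand up the whole target: a TCP transport, a 64 MiB / 512 B-block Malloc0 bdev (the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values set earlier), subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.2:4420. gen_nvmf_target_json then prints the bdev_nvme_attach_controller parameters shown, which bdevio reads as its JSON config via /dev/fd/62. Run by hand against the same target, the sequence would be roughly as follows (assuming rpc_cmd resolves to scripts/rpc.py talking to /var/tmp/spdk.sock, as the waitforlisten message suggests):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420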
00:33:17.647 [2024-10-08 18:41:10.802342] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660031 ] 00:33:17.647 [2024-10-08 18:41:10.869104] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:17.647 [2024-10-08 18:41:10.942969] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.647 [2024-10-08 18:41:10.943078] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.647 [2024-10-08 18:41:10.943079] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:17.906 I/O targets: 00:33:17.906 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:17.906 00:33:17.906 00:33:17.906 CUnit - A unit testing framework for C - Version 2.1-3 00:33:17.906 http://cunit.sourceforge.net/ 00:33:17.906 00:33:17.906 00:33:17.906 Suite: bdevio tests on: Nvme1n1 00:33:18.165 Test: blockdev write read block ...passed 00:33:18.165 Test: blockdev write zeroes read block ...passed 00:33:18.165 Test: blockdev write zeroes read no split ...passed 00:33:18.165 Test: blockdev write zeroes read split ...passed 00:33:18.165 Test: blockdev write zeroes read split partial ...passed 00:33:18.165 Test: blockdev reset ...[2024-10-08 18:41:11.363580] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:18.165 [2024-10-08 18:41:11.363645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65b400 (9): Bad file descriptor 00:33:18.165 [2024-10-08 18:41:11.457473] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:18.165 passed 00:33:18.165 Test: blockdev write read 8 blocks ...passed 00:33:18.165 Test: blockdev write read size > 128k ...passed 00:33:18.165 Test: blockdev write read invalid size ...passed 00:33:18.423 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:18.423 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:18.423 Test: blockdev write read max offset ...passed 00:33:18.423 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:18.423 Test: blockdev writev readv 8 blocks ...passed 00:33:18.423 Test: blockdev writev readv 30 x 1block ...passed 00:33:18.423 Test: blockdev writev readv block ...passed 00:33:18.423 Test: blockdev writev readv size > 128k ...passed 00:33:18.423 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:18.423 Test: blockdev comparev and writev ...[2024-10-08 18:41:11.670186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:18.423 [2024-10-08 18:41:11.670215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.423 [2024-10-08 18:41:11.670229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:18.423 [2024-10-08 18:41:11.670237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:18.423 [2024-10-08 18:41:11.670535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:18.423 [2024-10-08 18:41:11.670554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:18.423 [2024-10-08 18:41:11.670572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:18.423 [2024-10-08 18:41:11.670583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:18.423 [2024-10-08 18:41:11.670885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:18.423 [2024-10-08 18:41:11.670900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:18.423 [2024-10-08 18:41:11.670913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:18.423 [2024-10-08 18:41:11.670922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:18.423 [2024-10-08 18:41:11.671208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:18.423 [2024-10-08 18:41:11.671219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:18.423 [2024-10-08 18:41:11.671231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:18.423 [2024-10-08 18:41:11.671239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:18.423 passed 00:33:18.682 Test: blockdev nvme passthru rw ...passed 00:33:18.682 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:41:11.753720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:18.682 [2024-10-08 18:41:11.753738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:18.682 [2024-10-08 18:41:11.753851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:18.682 [2024-10-08 18:41:11.753861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:18.682 [2024-10-08 18:41:11.753964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:18.682 [2024-10-08 18:41:11.753974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:18.682 [2024-10-08 18:41:11.754077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:18.682 [2024-10-08 18:41:11.754087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:18.682 passed 00:33:18.682 Test: blockdev nvme admin passthru ...passed 00:33:18.682 Test: blockdev copy ...passed 00:33:18.682 00:33:18.682 Run Summary: Type Total Ran Passed Failed Inactive 00:33:18.682 suites 1 1 n/a 0 0 00:33:18.682 tests 23 23 23 0 0 00:33:18.682 asserts 152 152 152 0 n/a 00:33:18.682 00:33:18.682 Elapsed time = 1.189 seconds 00:33:18.683 18:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:18.683 18:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.683 18:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.683 18:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.683 18:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:18.683 18:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:18.683 18:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:18.683 18:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:18.683 18:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:18.683 18:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:18.683 18:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:18.683 18:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:18.683 rmmod nvme_tcp 00:33:18.942 rmmod nvme_fabrics 00:33:18.942 rmmod nvme_keyring 00:33:18.942 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
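With the suite green (23/23 tests, 152/152 asserts, 1.189 s elapsed), nvmftestfini unwinds the setup in reverse: delete the subsystem, unload nvme-tcp (which drags nvme_fabrics and nvme_keyring with it, hence the rmmod lines), kill target pid 659951, strip the tagged iptables rule, and tear the namespace back down. Condensed, with the step whose output the trace hides marked as assumed:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp        # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    kill 659951                    # the nvmf_tgt reactor process started earlier
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK rule
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk   # assumed: _remove_spdk_ns runs with its
                                      # output redirected away in the trace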
00:33:18.942 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:18.942 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:18.942 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 659951 ']' 00:33:18.942 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 659951 00:33:18.942 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 659951 ']' 00:33:18.942 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 659951 00:33:18.942 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:33:18.942 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:18.942 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 659951 00:33:18.942 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:33:18.942 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:33:18.942 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 659951' 00:33:18.942 killing process with pid 659951 00:33:18.942 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 659951 00:33:18.942 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 659951 00:33:19.202 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:19.202 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:19.202 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:19.202 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:33:19.202 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:19.202 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:33:19.202 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:33:19.202 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:19.202 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:19.202 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.202 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:19.202 18:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.109 18:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:21.109 00:33:21.109 real 0m10.783s 00:33:21.109 user 0m9.799s 
00:33:21.109 sys 0m5.343s 00:33:21.109 18:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:21.109 18:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:21.109 ************************************ 00:33:21.109 END TEST nvmf_bdevio 00:33:21.109 ************************************ 00:33:21.109 18:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:21.109 00:33:21.109 real 4m41.946s 00:33:21.109 user 9m17.802s 00:33:21.109 sys 1m53.959s 00:33:21.109 18:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:21.109 18:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:21.109 ************************************ 00:33:21.109 END TEST nvmf_target_core_interrupt_mode 00:33:21.109 ************************************ 00:33:21.369 18:41:14 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:21.369 18:41:14 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:21.369 18:41:14 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:21.369 18:41:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:21.369 ************************************ 00:33:21.369 START TEST nvmf_interrupt 00:33:21.369 ************************************ 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:21.369 * Looking for test storage... 
00:33:21.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:21.369 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:21.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.370 --rc genhtml_branch_coverage=1 00:33:21.370 --rc genhtml_function_coverage=1 00:33:21.370 --rc genhtml_legend=1 00:33:21.370 --rc geninfo_all_blocks=1 00:33:21.370 --rc geninfo_unexecuted_blocks=1 00:33:21.370 00:33:21.370 ' 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:21.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.370 --rc genhtml_branch_coverage=1 00:33:21.370 --rc genhtml_function_coverage=1 00:33:21.370 --rc genhtml_legend=1 00:33:21.370 --rc geninfo_all_blocks=1 00:33:21.370 --rc geninfo_unexecuted_blocks=1 00:33:21.370 00:33:21.370 ' 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:21.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.370 --rc genhtml_branch_coverage=1 00:33:21.370 --rc genhtml_function_coverage=1 00:33:21.370 --rc genhtml_legend=1 00:33:21.370 --rc geninfo_all_blocks=1 00:33:21.370 --rc geninfo_unexecuted_blocks=1 00:33:21.370 00:33:21.370 ' 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:21.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.370 --rc genhtml_branch_coverage=1 00:33:21.370 --rc genhtml_function_coverage=1 00:33:21.370 --rc genhtml_legend=1 00:33:21.370 --rc geninfo_all_blocks=1 00:33:21.370 --rc geninfo_unexecuted_blocks=1 00:33:21.370 00:33:21.370 ' 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:21.370 18:41:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:27.945 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:27.945 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:27.945 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:27.945 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:27.945 18:41:20 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:27.945 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:27.945 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:27.945 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:27.946 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.946 18:41:20 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:27.946 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:27.946 Found net devices under 0000:86:00.0: cvl_0_0 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:27.946 Found net devices under 0000:86:00.1: cvl_0_1 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:27.946 18:41:20 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:27.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:27.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:33:27.946 00:33:27.946 --- 10.0.0.2 ping statistics --- 00:33:27.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.946 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:27.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:27.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:33:27.946 00:33:27.946 --- 10.0.0.1 ping statistics --- 00:33:27.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.946 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:27.946 18:41:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:27.947 18:41:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:27.947 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=663803 00:33:27.947 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 663803 00:33:27.947 18:41:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:27.947 18:41:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 663803 ']' 00:33:27.947 18:41:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.947 18:41:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:27.947 18:41:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.947 18:41:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:27.947 18:41:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:27.947 [2024-10-08 18:41:20.641423] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:27.947 [2024-10-08 18:41:20.642349] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:33:27.947 [2024-10-08 18:41:20.642389] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:27.947 [2024-10-08 18:41:20.713370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:27.947 [2024-10-08 18:41:20.783121] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
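The ping exchange above confirms the plumbing nvmf_tcp_init builds: one NIC port is moved into a private network namespace for the target, the other stays in the root namespace for the initiator, and only TCP port 4420 is opened between them. A minimal standalone sketch of that setup, assuming the same interface names and 10.0.0.0/24 test subnet as this run:

  #!/usr/bin/env bash
  # Target/initiator split as performed by nvmf_tcp_init above.
  TARGET_IF=cvl_0_0        # moved into the namespace, serves NVMe/TCP
  INITIATOR_IF=cvl_0_1     # stays in the root namespace, runs the host side
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  # Open the NVMe/TCP listener port, tagged so teardown can find the rule:
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1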
00:33:27.947 [2024-10-08 18:41:20.783162] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:27.947 [2024-10-08 18:41:20.783169] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:27.947 [2024-10-08 18:41:20.783174] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:27.947 [2024-10-08 18:41:20.783179] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:27.947 [2024-10-08 18:41:20.783995] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.947 [2024-10-08 18:41:20.783996] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.947 [2024-10-08 18:41:20.850760] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:27.947 [2024-10-08 18:41:20.851321] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:27.947 [2024-10-08 18:41:20.851584] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:28.207 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:28.207 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:33:28.207 18:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:28.207 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:28.207 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:28.207 18:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:28.207 18:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:28.207 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:28.207 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:28.207 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:28.466 5000+0 records in 00:33:28.466 5000+0 records out 00:33:28.466 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0167673 s, 611 MB/s 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:28.466 AIO0 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:28.466 [2024-10-08 18:41:21.576876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.466 18:41:21 
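setup_bdev_aio above backs the test namespace with a 10 MB zero-filled file exposed through the AIO bdev module, and the transport/subsystem RPCs that follow wire it up to the TCP listener. A repo-relative sketch of the same sequence via rpc.py (the job used absolute workspace paths; rpc.py talks to the default /var/tmp/spdk.sock):

  dd if=/dev/zero of=test/nvmf/target/aiofile bs=2048 count=5000
  scripts/rpc.py bdev_aio_create test/nvmf/target/aiofile AIO0 2048
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420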
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:28.466 [2024-10-08 18:41:21.629119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 663803 0 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 663803 0 idle 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=663803 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 663803 -w 256 00:33:28.466 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 663803 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.28 reactor_0' 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 663803 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.28 reactor_0 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 663803 1 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 663803 1 idle 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=663803 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 663803 -w 256 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 663807 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 663807 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:28.725 18:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=664066 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
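The spdk_nvme_perf invocation above is what drives both reactors out of idle. Its flags, unpacked as a hedged bash snippet (values copied from the command; comments reflect spdk_nvme_perf's usual flag meanings):

  perf_args=(
    -q 256      # 256 outstanding I/Os per worker
    -o 4096     # 4 KiB I/O size
    -w randrw   # random mixed read/write workload
    -M 30       # read mix percentage: 30% reads, 70% writes
    -t 10       # run for 10 seconds
    -c 0xC      # pin perf workers to cores 2 and 3 (the target owns 0 and 1)
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
  )
  build/bin/spdk_nvme_perf "${perf_args[@]}" &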
00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 663803 0 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 663803 0 busy 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=663803 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 663803 -w 256 00:33:28.725 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 663803 root 20 0 128.2g 46848 33792 R 33.3 0.0 0:00.33 reactor_0' 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 663803 root 20 0 128.2g 46848 33792 R 33.3 0.0 0:00.33 reactor_0 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=33.3 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=33 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 663803 1 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 663803 1 busy 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=663803 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 663803 -w 256 00:33:28.984 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:29.243 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 663807 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.21 reactor_1' 00:33:29.243 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 663807 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.21 reactor_1 00:33:29.243 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:29.243 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:29.243 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:29.243 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:29.243 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:29.243 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:29.243 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:29.243 18:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:29.243 18:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 664066 00:33:39.224 Initializing NVMe Controllers 00:33:39.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:39.224 Controller IO queue size 256, less than required. 00:33:39.224 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:39.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:39.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:39.224 Initialization complete. Launching workers. 
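The busy verdicts above come from sampling one top snapshot per reactor thread and reading the %CPU column. The probe from interrupt/common.sh, condensed (PID and thread names from this run; $9 is %CPU in top's batch output):

  reactor_cpu_rate() {
    local pid=$1 idx=$2
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" |
      sed -e 's/^\s*//g' | awk '{print $9}'
  }
  rate=$(reactor_cpu_rate 663803 1)   # e.g. 99.9 while perf is running
  rate=${rate%.*}                     # integer part, like the cpu_rate=99.9 -> 99 step above
  (( rate >= 30 )) && echo busy || echo idle   # BUSY_THRESHOLD=30 during the perf run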
00:33:39.224 ======================================================== 00:33:39.224 Latency(us) 00:33:39.224 Device Information : IOPS MiB/s Average min max 00:33:39.224 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16268.29 63.55 15744.46 3448.92 56596.41 00:33:39.224 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16482.19 64.38 15537.37 7814.55 55253.18 00:33:39.224 ======================================================== 00:33:39.224 Total : 32750.48 127.93 15640.24 3448.92 56596.41 00:33:39.224 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 663803 0 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 663803 0 idle 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=663803 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 663803 -w 256 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 663803 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.26 reactor_0' 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 663803 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.26 reactor_0 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 663803 1 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 663803 1 idle 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=663803 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 663803 -w 256 00:33:39.224 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:39.483 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 663807 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:33:39.483 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 663807 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:33:39.483 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:39.483 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:39.483 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:39.483 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:39.483 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:39.483 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:39.483 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:39.483 18:41:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:39.483 18:41:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:39.742 18:41:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:33:39.742 18:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:33:39.742 18:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:39.742 18:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:33:39.742 18:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
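waitforserial above simply polls lsblk until a block device carrying the subsystem serial appears, which is how the harness knows the kernel initiator finished probing the namespace. A condensed sketch of that loop (serial and the 15-try/2-second cadence as in the log):

  serial=SPDKISFASTANDAWESOME
  for ((i = 0; i <= 15; i++)); do
    (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )) && break
    sleep 2
  done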
for i in {0..1} 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 663803 0 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 663803 0 idle 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=663803 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 663803 -w 256 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 663803 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:20.53 reactor_0' 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 663803 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:20.53 reactor_0 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 663803 1 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 663803 1 idle 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=663803 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:42.277 18:41:35 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 663803 -w 256 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 663807 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.09 reactor_1' 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 663807 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.09 reactor_1 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:42.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:42.277 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:42.277 rmmod nvme_tcp 00:33:42.277 rmmod nvme_fabrics 00:33:42.277 rmmod nvme_keyring 00:33:42.536 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:42.536 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:42.536 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:42.536 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 663803 ']' 00:33:42.536 
18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 663803 00:33:42.536 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 663803 ']' 00:33:42.536 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 663803 00:33:42.536 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:33:42.536 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:42.536 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 663803 00:33:42.536 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:42.536 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:42.536 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 663803' 00:33:42.536 killing process with pid 663803 00:33:42.536 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 663803 00:33:42.536 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 663803 00:33:42.795 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:42.795 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:42.795 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:42.795 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:42.795 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:33:42.795 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:42.795 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:33:42.795 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:42.795 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:42.795 18:41:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.795 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:42.795 18:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.698 18:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:44.698 00:33:44.698 real 0m23.497s 00:33:44.698 user 0m39.957s 00:33:44.698 sys 0m8.417s 00:33:44.698 18:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:44.698 18:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:44.698 ************************************ 00:33:44.698 END TEST nvmf_interrupt 00:33:44.698 ************************************ 00:33:44.698 00:33:44.698 real 28m11.447s 00:33:44.698 user 58m37.254s 00:33:44.698 sys 9m16.280s 00:33:44.698 18:41:38 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:44.698 18:41:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:44.698 ************************************ 00:33:44.698 END TEST nvmf_tcp 00:33:44.698 ************************************ 00:33:44.957 18:41:38 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:33:44.957 18:41:38 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:44.957 18:41:38 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
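nvmftestfini above unwinds everything nvmftestinit built: disconnect the host, unload the kernel initiator modules, kill the target, strip only the iptables rules tagged SPDK_NVMF, and drop the namespace. A rough standalone equivalent using the names from this run:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  modprobe -r nvme-tcp nvme-fabrics nvme-keyring 2>/dev/null || true
  kill 663803 2>/dev/null || true
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # removes only SPDK's tagged rules
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # what _remove_spdk_ns amounts to
  ip -4 addr flush cvl_0_1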
00:33:44.957 18:41:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:44.957 18:41:38 -- common/autotest_common.sh@10 -- # set +x 00:33:44.957 ************************************ 00:33:44.957 START TEST spdkcli_nvmf_tcp 00:33:44.957 ************************************ 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:44.957 * Looking for test storage... 00:33:44.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:44.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.957 --rc genhtml_branch_coverage=1 00:33:44.957 --rc genhtml_function_coverage=1 00:33:44.957 --rc genhtml_legend=1 00:33:44.957 --rc geninfo_all_blocks=1 00:33:44.957 --rc geninfo_unexecuted_blocks=1 00:33:44.957 00:33:44.957 ' 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:44.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.957 --rc genhtml_branch_coverage=1 00:33:44.957 --rc genhtml_function_coverage=1 00:33:44.957 --rc genhtml_legend=1 00:33:44.957 --rc geninfo_all_blocks=1 00:33:44.957 --rc geninfo_unexecuted_blocks=1 00:33:44.957 00:33:44.957 ' 00:33:44.957 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:44.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.957 --rc genhtml_branch_coverage=1 00:33:44.958 --rc genhtml_function_coverage=1 00:33:44.958 --rc genhtml_legend=1 00:33:44.958 --rc geninfo_all_blocks=1 00:33:44.958 --rc geninfo_unexecuted_blocks=1 00:33:44.958 00:33:44.958 ' 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:44.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.958 --rc genhtml_branch_coverage=1 00:33:44.958 --rc genhtml_function_coverage=1 00:33:44.958 --rc genhtml_legend=1 00:33:44.958 --rc geninfo_all_blocks=1 00:33:44.958 --rc geninfo_unexecuted_blocks=1 00:33:44.958 00:33:44.958 ' 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:44.958 
18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:44.958 18:41:38 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:44.958 18:41:38 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:45.224 18:41:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:45.224 18:41:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:45.224 18:41:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:45.224 18:41:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:45.224 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:45.225 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.225 18:41:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:45.225 18:41:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=666752 00:33:45.225 18:41:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 666752 00:33:45.225 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 666752 ']' 00:33:45.225 18:41:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:45.225 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:45.225 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:45.225 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:45.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:45.225 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:45.225 18:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.225 [2024-10-08 18:41:38.334036] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
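Note the shell error captured in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', a numeric test against a variable that is empty in this environment, so test reports "integer expression expected" and the branch simply falls through. The run continues, but the guard can never fire. A hedged one-line hardening of that pattern, defaulting the value before the comparison (SOME_FLAG and the option added are placeholders, not the actual names used by common.sh):

    # Default the flag to 0 so an unset/empty value never reaches -eq
    # (placeholder variable and option; the real branch differs):
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=("--some-option")
    fi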
00:33:45.225 [2024-10-08 18:41:38.334086] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666752 ] 00:33:45.225 [2024-10-08 18:41:38.402771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:45.225 [2024-10-08 18:41:38.474805] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.225 [2024-10-08 18:41:38.474807] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.162 18:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:46.162 18:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:33:46.162 18:41:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:46.162 18:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:46.162 18:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:46.162 18:41:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:46.162 18:41:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:46.162 18:41:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:46.162 18:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:46.162 18:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:46.162 18:41:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:46.162 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:46.162 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:46.162 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:46.162 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:46.162 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:46.162 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:46.162 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:46.162 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:46.162 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:46.162 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:46.162 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:46.162 ' 00:33:48.694 [2024-10-08 18:41:41.891441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.141 [2024-10-08 18:41:43.223877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:52.746 [2024-10-08 18:41:45.699472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:54.649 [2024-10-08 18:41:47.882197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:56.551 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:56.551 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:56.551 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:56.551 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:56.551 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:56.551 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:56.551 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:56.551 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:56.551 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:56.551 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:56.551 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:56.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:56.551 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:56.552 18:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:56.552 18:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:56.552 18:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:56.552 18:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:56.552 18:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:56.552 18:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:56.552 18:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:56.552 18:41:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:56.810 18:41:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:56.810 18:41:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:56.810 18:41:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:56.810 18:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:56.810 18:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.068 
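The three check_match steps traced above verify the configuration that spdkcli_job.py just created: the live tree under /nvmf is dumped with spdkcli.py ll, compared against a stored wildcard pattern by the match tool, and the generated dump is removed. Reconstructed as plain commands (the redirection target is inferred from the rm -f that follows, so treat the exact dump path as an assumption):

    # Dump live spdkcli state, diff it against the pattern file, clean up.
    scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
    test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match
    rm -f test/spdkcli/match_files/spdkcli_nvmf.test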
18:41:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:57.068 18:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:57.068 18:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.068 18:41:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:57.068 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:57.068 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:57.068 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:57.068 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:57.068 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:57.068 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:57.068 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:57.068 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:57.068 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:57.068 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:57.068 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:57.068 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:57.068 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:57.068 ' 00:34:02.340 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:02.340 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:02.340 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:02.340 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:02.340 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:02.340 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:02.340 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:02.340 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:02.340 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:02.340 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:02.340 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:02.340 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:02.340 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:02.340 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:02.599 18:41:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:02.599 18:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:02.599 18:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:02.599 
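The delete pass above undoes the earlier create pass in reverse dependency order: namespaces, hosts and listen addresses come off first, then the subsystems are deleted, and the Malloc bdevs are removed last, so no bdev disappears while a subsystem namespace still references it. For reference, a sketch of the same ordering through raw JSON-RPC calls instead of spdkcli (standard SPDK rpc.py methods; the specific arguments shown are illustrative, not taken from this log):

    # Same reverse-order teardown via rpc.py (illustrative subset):
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 \
        -t tcp -a 127.0.0.1 -s 4262
    scripts/rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode3
    scripts/rpc.py bdev_malloc_delete Malloc1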
18:41:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 666752 00:34:02.599 18:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 666752 ']' 00:34:02.599 18:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 666752 00:34:02.599 18:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:34:02.599 18:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:02.599 18:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 666752 00:34:02.599 18:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:02.599 18:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:02.599 18:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 666752' 00:34:02.599 killing process with pid 666752 00:34:02.599 18:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 666752 00:34:02.599 18:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 666752 00:34:02.858 18:41:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:02.858 18:41:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:02.858 18:41:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 666752 ']' 00:34:02.858 18:41:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 666752 00:34:02.858 18:41:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 666752 ']' 00:34:02.858 18:41:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 666752 00:34:02.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (666752) - No such process 00:34:02.858 18:41:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 666752 is not found' 00:34:02.858 Process with pid 666752 is not found 00:34:02.858 18:41:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:02.858 18:41:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:02.858 18:41:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:02.858 00:34:02.858 real 0m17.974s 00:34:02.858 user 0m39.556s 00:34:02.858 sys 0m0.862s 00:34:02.858 18:41:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:02.858 18:41:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:02.858 ************************************ 00:34:02.858 END TEST spdkcli_nvmf_tcp 00:34:02.858 ************************************ 00:34:02.858 18:41:56 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:02.858 18:41:56 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:02.858 18:41:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:02.858 18:41:56 -- common/autotest_common.sh@10 -- # set +x 00:34:02.858 ************************************ 00:34:02.858 START TEST nvmf_identify_passthru 00:34:02.858 ************************************ 00:34:02.858 18:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:03.118 * Looking for test storage... 
00:34:03.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:03.118 18:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:03.118 18:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:34:03.118 18:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:03.118 18:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:03.118 18:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:03.118 18:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:03.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.118 --rc genhtml_branch_coverage=1 00:34:03.118 --rc genhtml_function_coverage=1 00:34:03.118 --rc genhtml_legend=1 00:34:03.118 --rc geninfo_all_blocks=1 00:34:03.118 --rc geninfo_unexecuted_blocks=1 00:34:03.118 00:34:03.118 ' 00:34:03.118 18:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:03.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.118 --rc genhtml_branch_coverage=1 00:34:03.118 --rc genhtml_function_coverage=1 00:34:03.118 --rc genhtml_legend=1 00:34:03.118 --rc geninfo_all_blocks=1 00:34:03.118 --rc geninfo_unexecuted_blocks=1 00:34:03.118 00:34:03.118 ' 00:34:03.118 18:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:03.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.118 --rc genhtml_branch_coverage=1 00:34:03.118 --rc genhtml_function_coverage=1 00:34:03.118 --rc genhtml_legend=1 00:34:03.118 --rc geninfo_all_blocks=1 00:34:03.118 --rc geninfo_unexecuted_blocks=1 00:34:03.118 00:34:03.118 ' 00:34:03.118 18:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:03.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.118 --rc genhtml_branch_coverage=1 00:34:03.118 --rc genhtml_function_coverage=1 00:34:03.118 --rc genhtml_legend=1 00:34:03.118 --rc geninfo_all_blocks=1 00:34:03.118 --rc geninfo_unexecuted_blocks=1 00:34:03.118 00:34:03.118 ' 00:34:03.118 18:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:03.118 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.118 18:41:56 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.118 18:41:56 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.118 18:41:56 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.119 18:41:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.119 18:41:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:03.119 18:41:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:03.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:03.119 18:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.119 18:41:56 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:03.119 18:41:56 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.119 18:41:56 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.119 18:41:56 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.119 18:41:56 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.119 18:41:56 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.119 18:41:56 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.119 18:41:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:03.119 18:41:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.119 18:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.119 18:41:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:03.119 18:41:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:03.119 18:41:56 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:34:03.119 18:41:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:34:09.685 18:42:01 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:09.685 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:09.686 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:09.686 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:09.686 Found net devices under 0000:86:00.0: cvl_0_0 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:09.686 Found net devices under 0000:86:00.1: cvl_0_1 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:09.686 18:42:01 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:09.686 18:42:01 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:09.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:09.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:34:09.686 00:34:09.686 --- 10.0.0.2 ping statistics --- 00:34:09.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.686 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:09.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:09.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:34:09.686 00:34:09.686 --- 10.0.0.1 ping statistics --- 00:34:09.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.686 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:09.686 18:42:02 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:09.686 18:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:09.686 18:42:02 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:09.686 18:42:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.686 18:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:09.686 18:42:02 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:34:09.686 18:42:02 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:34:09.686 18:42:02 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:34:09.686 18:42:02 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:34:09.686 18:42:02 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:34:09.686 18:42:02 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:34:09.686 18:42:02 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:09.686 18:42:02 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:09.686 18:42:02 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:34:09.686 18:42:02 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:34:09.686 18:42:02 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:34:09.686 18:42:02 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0 00:34:09.686 18:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:34:09.686 18:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:34:09.686 18:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:09.686 18:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:09.686 18:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:13.875 18:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLN951000C61P6AGN 00:34:13.875 18:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:13.875 18:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:13.875 18:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:19.146 18:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:19.146 18:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:19.146 18:42:11 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:19.146 18:42:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.146 18:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:19.146 18:42:11 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:19.146 18:42:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.147 18:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=674753 00:34:19.147 18:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:19.147 18:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:19.147 18:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 674753 00:34:19.147 18:42:11 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 674753 ']' 00:34:19.147 18:42:11 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:19.147 18:42:11 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:19.147 18:42:11 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:19.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:19.147 18:42:11 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:19.147 18:42:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.147 [2024-10-08 18:42:11.895642] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:34:19.147 [2024-10-08 18:42:11.895694] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.147 [2024-10-08 18:42:11.968263] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:19.147 [2024-10-08 18:42:12.046325] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:19.147 [2024-10-08 18:42:12.046365] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:19.147 [2024-10-08 18:42:12.046373] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.147 [2024-10-08 18:42:12.046385] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.147 [2024-10-08 18:42:12.046390] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.147 [2024-10-08 18:42:12.047841] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.147 [2024-10-08 18:42:12.047949] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:34:19.147 [2024-10-08 18:42:12.048056] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.147 [2024-10-08 18:42:12.048058] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:34:19.405 18:42:12 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:19.405 18:42:12 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:34:19.405 18:42:12 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:19.405 18:42:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.405 18:42:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.405 INFO: Log level set to 20 00:34:19.405 INFO: Requests: 00:34:19.405 { 00:34:19.405 "jsonrpc": "2.0", 00:34:19.405 "method": "nvmf_set_config", 00:34:19.405 "id": 1, 00:34:19.405 "params": { 00:34:19.405 "admin_cmd_passthru": { 00:34:19.405 "identify_ctrlr": true 00:34:19.405 } 00:34:19.405 } 00:34:19.405 } 00:34:19.405 00:34:19.664 INFO: response: 00:34:19.664 { 00:34:19.664 "jsonrpc": "2.0", 00:34:19.664 "id": 1, 00:34:19.664 "result": true 00:34:19.664 } 00:34:19.664 00:34:19.664 18:42:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.664 18:42:12 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:19.664 18:42:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.664 18:42:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.664 INFO: Setting log level to 20 00:34:19.664 INFO: Setting log level to 20 00:34:19.664 INFO: Log level set to 20 00:34:19.664 INFO: Log level set to 20 00:34:19.664 INFO: Requests: 00:34:19.664 { 00:34:19.664 "jsonrpc": "2.0", 00:34:19.664 "method": "framework_start_init", 00:34:19.664 "id": 1 00:34:19.664 } 00:34:19.664 00:34:19.664 INFO: Requests: 00:34:19.664 { 00:34:19.664 "jsonrpc": "2.0", 00:34:19.664 "method": "framework_start_init", 00:34:19.664 "id": 1 00:34:19.664 } 00:34:19.664 00:34:19.664 [2024-10-08 18:42:12.820103] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:19.664 INFO: response: 00:34:19.664 { 00:34:19.664 "jsonrpc": "2.0", 00:34:19.664 "id": 1, 00:34:19.664 "result": true 00:34:19.664 } 00:34:19.664 00:34:19.664 INFO: response: 00:34:19.664 { 00:34:19.664 "jsonrpc": "2.0", 00:34:19.664 "id": 1, 00:34:19.664 "result": true 00:34:19.664 } 00:34:19.664 00:34:19.664 18:42:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.664 18:42:12 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:19.664 18:42:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.664 18:42:12 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:19.664 INFO: Setting log level to 40 00:34:19.664 INFO: Setting log level to 40 00:34:19.664 INFO: Setting log level to 40 00:34:19.664 [2024-10-08 18:42:12.833877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.664 18:42:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.664 18:42:12 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:19.664 18:42:12 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:19.664 18:42:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.664 18:42:12 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:34:19.664 18:42:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.664 18:42:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.949 Nvme0n1 00:34:22.949 18:42:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.949 18:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:22.949 18:42:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.949 18:42:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.949 18:42:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.949 18:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:22.949 18:42:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.949 18:42:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.949 18:42:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.949 18:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.949 18:42:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.949 18:42:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.949 [2024-10-08 18:42:15.738927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.949 18:42:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.949 18:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:22.949 18:42:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.949 18:42:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.949 [ 00:34:22.949 { 00:34:22.949 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:22.949 "subtype": "Discovery", 00:34:22.949 "listen_addresses": [], 00:34:22.949 "allow_any_host": true, 00:34:22.949 "hosts": [] 00:34:22.949 }, 00:34:22.949 { 00:34:22.949 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:22.949 "subtype": "NVMe", 00:34:22.949 "listen_addresses": [ 00:34:22.949 { 00:34:22.949 "trtype": "TCP", 00:34:22.949 "adrfam": "IPv4", 00:34:22.949 "traddr": "10.0.0.2", 00:34:22.949 "trsvcid": "4420" 00:34:22.950 } 00:34:22.950 ], 00:34:22.950 "allow_any_host": true, 00:34:22.950 "hosts": [], 00:34:22.950 "serial_number": 
"SPDK00000000000001", 00:34:22.950 "model_number": "SPDK bdev Controller", 00:34:22.950 "max_namespaces": 1, 00:34:22.950 "min_cntlid": 1, 00:34:22.950 "max_cntlid": 65519, 00:34:22.950 "namespaces": [ 00:34:22.950 { 00:34:22.950 "nsid": 1, 00:34:22.950 "bdev_name": "Nvme0n1", 00:34:22.950 "name": "Nvme0n1", 00:34:22.950 "nguid": "C8447401E2514E7FB3F21553D25F2841", 00:34:22.950 "uuid": "c8447401-e251-4e7f-b3f2-1553d25f2841" 00:34:22.950 } 00:34:22.950 ] 00:34:22.950 } 00:34:22.950 ] 00:34:22.950 18:42:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.950 18:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:22.950 18:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:22.950 18:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:22.950 18:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:34:22.950 18:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:22.950 18:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:22.950 18:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:22.950 18:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:22.950 18:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:34:22.950 18:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:22.950 18:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:22.950 18:42:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.950 18:42:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.950 18:42:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.950 18:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:22.950 18:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:22.950 18:42:16 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:22.950 18:42:16 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:22.950 18:42:16 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:22.950 18:42:16 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:22.950 18:42:16 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:22.950 18:42:16 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:22.950 rmmod nvme_tcp 00:34:22.950 rmmod nvme_fabrics 00:34:22.950 rmmod nvme_keyring 00:34:22.950 18:42:16 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:22.950 18:42:16 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:22.950 18:42:16 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:22.950 18:42:16 nvmf_identify_passthru -- nvmf/common.sh@515 -- # 
'[' -n 674753 ']' 00:34:22.950 18:42:16 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 674753 00:34:22.950 18:42:16 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 674753 ']' 00:34:22.950 18:42:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 674753 00:34:22.950 18:42:16 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:34:22.950 18:42:16 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:22.950 18:42:16 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 674753 00:34:22.950 18:42:16 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:22.950 18:42:16 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:22.950 18:42:16 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 674753' 00:34:22.950 killing process with pid 674753 00:34:22.950 18:42:16 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 674753 00:34:22.950 18:42:16 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 674753 00:34:24.852 18:42:18 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:24.852 18:42:18 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:24.852 18:42:18 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:24.852 18:42:18 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:24.852 18:42:18 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:34:24.852 18:42:18 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:24.852 18:42:18 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:34:24.852 18:42:18 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:24.852 18:42:18 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:24.852 18:42:18 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.852 18:42:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:24.852 18:42:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.388 18:42:20 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:27.388 00:34:27.388 real 0m24.111s 00:34:27.388 user 0m32.261s 00:34:27.388 sys 0m6.296s 00:34:27.388 18:42:20 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:27.388 18:42:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:27.388 ************************************ 00:34:27.388 END TEST nvmf_identify_passthru 00:34:27.388 ************************************ 00:34:27.388 18:42:20 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:27.388 18:42:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:27.388 18:42:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:27.388 18:42:20 -- common/autotest_common.sh@10 -- # set +x 00:34:27.388 ************************************ 00:34:27.388 START TEST nvmf_dif 00:34:27.388 ************************************ 00:34:27.388 18:42:20 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:27.388 * Looking for test storage... 
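Annotation: the nvmf_identify_passthru teardown traced above reduces to a short fixed sequence. A minimal sketch of the equivalent manual cleanup, assuming this run's target pid (674753) and interface names (cvl_0_1, cvl_0_0_ns_spdk), would be:

    kill 674753                                           # stop the nvmf target, then wait for it to exit
    modprobe -v -r nvme-tcp nvme-fabrics                  # unload initiator modules (nvme_keyring drops out as a dependency)
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only the SPDK_NVMF-tagged firewall rules
    ip netns delete cvl_0_0_ns_spdk                       # remove the target network namespace
    ip -4 addr flush cvl_0_1                              # clear the initiator-side test address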
00:34:27.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:27.388 18:42:20 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:27.388 18:42:20 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:34:27.388 18:42:20 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:27.388 18:42:20 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:27.388 18:42:20 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:27.388 18:42:20 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:27.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.388 --rc genhtml_branch_coverage=1 00:34:27.388 --rc genhtml_function_coverage=1 00:34:27.388 --rc genhtml_legend=1 00:34:27.388 --rc geninfo_all_blocks=1 00:34:27.388 --rc geninfo_unexecuted_blocks=1 00:34:27.388 00:34:27.388 ' 00:34:27.388 18:42:20 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:27.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.388 --rc genhtml_branch_coverage=1 00:34:27.388 --rc genhtml_function_coverage=1 00:34:27.388 --rc genhtml_legend=1 00:34:27.388 --rc geninfo_all_blocks=1 00:34:27.388 --rc geninfo_unexecuted_blocks=1 00:34:27.388 00:34:27.388 ' 00:34:27.388 18:42:20 nvmf_dif -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:34:27.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.388 --rc genhtml_branch_coverage=1 00:34:27.388 --rc genhtml_function_coverage=1 00:34:27.388 --rc genhtml_legend=1 00:34:27.388 --rc geninfo_all_blocks=1 00:34:27.388 --rc geninfo_unexecuted_blocks=1 00:34:27.388 00:34:27.388 ' 00:34:27.388 18:42:20 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:27.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.388 --rc genhtml_branch_coverage=1 00:34:27.388 --rc genhtml_function_coverage=1 00:34:27.388 --rc genhtml_legend=1 00:34:27.388 --rc geninfo_all_blocks=1 00:34:27.388 --rc geninfo_unexecuted_blocks=1 00:34:27.388 00:34:27.388 ' 00:34:27.388 18:42:20 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.388 18:42:20 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.388 18:42:20 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.388 18:42:20 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.388 18:42:20 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.388 18:42:20 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:27.388 18:42:20 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:27.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:27.388 18:42:20 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:27.388 18:42:20 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:27.388 18:42:20 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:27.388 18:42:20 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:27.388 18:42:20 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:27.389 18:42:20 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:27.389 18:42:20 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:27.389 18:42:20 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.389 18:42:20 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:27.389 18:42:20 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:27.389 18:42:20 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:27.389 18:42:20 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.389 18:42:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:27.389 18:42:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.389 18:42:20 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:27.389 18:42:20 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:27.389 18:42:20 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:34:27.389 18:42:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:33.955 18:42:26 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:33.955 18:42:26 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:33.955 18:42:26 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:33.955 18:42:26 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:33.955 18:42:26 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:33.955 18:42:26 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:33.955 18:42:26 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:33.955 18:42:26 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:33.956 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:33.956 
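Annotation: the device-discovery trace around this point classifies NICs purely by PCI vendor:device ID, then narrows to the family requested via SPDK_TEST_NVMF_NICS=e810. A condensed sketch of that logic, with IDs copied from the trace (pci_bus_cache maps "vendor:device" keys to PCI addresses), is:

    intel=0x8086 mellanox=0x15b3
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})  # Intel E810; this run matches 0000:86:00.0/.1
    x722=(${pci_bus_cache["$intel:0x37d2"]})                                    # Intel X722
    pci_devs=("${e810[@]}")                                                     # e810 requested: keep only the E810 ports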
18:42:26 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:33.956 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:33.956 Found net devices under 0000:86:00.0: cvl_0_0 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:33.956 Found net devices under 0000:86:00.1: cvl_0_1 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:33.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:33.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:34:33.956 00:34:33.956 --- 10.0.0.2 ping statistics --- 00:34:33.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.956 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:33.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:33.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:34:33.956 00:34:33.956 --- 10.0.0.1 ping statistics --- 00:34:33.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.956 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:34:33.956 18:42:26 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:35.858 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:35.858 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:35.858 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:35.858 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:35.858 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:35.858 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:35.858 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:35.858 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:35.858 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:35.858 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:35.858 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:35.858 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:35.858 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:35.858 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:35.858 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:35.858 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:35.858 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:36.117 18:42:29 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:36.117 18:42:29 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:36.117 18:42:29 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:36.117 18:42:29 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:36.117 18:42:29 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:36.117 18:42:29 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:36.117 18:42:29 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:36.118 18:42:29 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:36.118 18:42:29 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:36.118 18:42:29 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:36.118 18:42:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.118 18:42:29 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=680447 00:34:36.118 18:42:29 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:36.118 18:42:29 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 680447 00:34:36.118 18:42:29 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 680447 ']' 00:34:36.118 18:42:29 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.118 18:42:29 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:36.118 18:42:29 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:34:36.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:36.118 18:42:29 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:36.118 18:42:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.118 [2024-10-08 18:42:29.351496] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:34:36.118 [2024-10-08 18:42:29.351540] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:36.118 [2024-10-08 18:42:29.424014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:36.377 [2024-10-08 18:42:29.502892] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:36.377 [2024-10-08 18:42:29.502925] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:36.377 [2024-10-08 18:42:29.502933] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:36.377 [2024-10-08 18:42:29.502939] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:36.377 [2024-10-08 18:42:29.502944] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:36.377 [2024-10-08 18:42:29.503516] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.944 18:42:30 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:36.944 18:42:30 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:34:36.944 18:42:30 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:36.944 18:42:30 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:36.944 18:42:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.944 18:42:30 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:36.944 18:42:30 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:36.944 18:42:30 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:36.944 18:42:30 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.944 18:42:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.944 [2024-10-08 18:42:30.227942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.944 18:42:30 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.944 18:42:30 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:36.944 18:42:30 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:36.944 18:42:30 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:36.944 18:42:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.944 ************************************ 00:34:36.944 START TEST fio_dif_1_default 00:34:36.944 ************************************ 00:34:36.944 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:37.203 bdev_null0 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:37.203 [2024-10-08 18:42:30.300256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:37.203 { 00:34:37.203 "params": { 00:34:37.203 "name": "Nvme$subsystem", 00:34:37.203 "trtype": "$TEST_TRANSPORT", 00:34:37.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:37.203 "adrfam": "ipv4", 00:34:37.203 "trsvcid": "$NVMF_PORT", 00:34:37.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:37.203 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:34:37.203 "hdgst": ${hdgst:-false}, 00:34:37.203 "ddgst": ${ddgst:-false} 00:34:37.203 }, 00:34:37.203 "method": "bdev_nvme_attach_controller" 00:34:37.203 } 00:34:37.203 EOF 00:34:37.203 )") 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
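Annotation: the heredoc template above is rendered through jq into the bdev_nvme_attach_controller JSON printed just below, then handed to fio's spdk_bdev engine over fd 62 alongside the job file on fd 61. A standalone sketch of the same invocation, assuming the generated JSON and job file are saved to ordinary files (nvme0.json and jobfile.fio are hypothetical names), would be:

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=nvme0.json jobfile.fio  # json: attach-controller config; jobfile: the fio job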
00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:37.203 "params": { 00:34:37.203 "name": "Nvme0", 00:34:37.203 "trtype": "tcp", 00:34:37.203 "traddr": "10.0.0.2", 00:34:37.203 "adrfam": "ipv4", 00:34:37.203 "trsvcid": "4420", 00:34:37.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:37.203 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:37.203 "hdgst": false, 00:34:37.203 "ddgst": false 00:34:37.203 }, 00:34:37.203 "method": "bdev_nvme_attach_controller" 00:34:37.203 }' 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:37.203 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.204 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.204 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:37.204 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:37.204 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:37.204 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:37.204 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:37.204 18:42:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.461 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:37.461 fio-3.35 00:34:37.461 Starting 1 thread 00:34:49.665 00:34:49.665 filename0: (groupid=0, jobs=1): err= 0: pid=680847: Tue Oct 8 18:42:41 2024 00:34:49.665 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:34:49.665 slat (nsec): min=5719, max=26056, avg=6255.91, stdev=1588.80 00:34:49.665 clat (usec): min=40776, max=45707, avg=41014.10, stdev=322.94 00:34:49.665 lat (usec): min=40781, max=45733, avg=41020.36, stdev=323.39 00:34:49.665 clat percentiles (usec): 00:34:49.665 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:49.665 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:49.665 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:49.665 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:34:49.665 | 99.99th=[45876] 00:34:49.665 bw ( KiB/s): min= 384, max= 416, per=99.50%, avg=388.80, stdev=11.72, samples=20 00:34:49.665 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:49.665 lat (msec) : 50=100.00% 00:34:49.665 cpu : usr=92.91%, sys=6.85%, ctx=13, majf=0, minf=0 00:34:49.665 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.665 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.665 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:49.665 00:34:49.665 Run status group 0 (all jobs): 
00:34:49.665 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10012-10012msec 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.665 00:34:49.665 real 0m11.185s 00:34:49.665 user 0m15.979s 00:34:49.665 sys 0m0.989s 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:49.665 ************************************ 00:34:49.665 END TEST fio_dif_1_default 00:34:49.665 ************************************ 00:34:49.665 18:42:41 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:49.665 18:42:41 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:49.665 18:42:41 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:49.665 18:42:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:49.665 ************************************ 00:34:49.665 START TEST fio_dif_1_multi_subsystems 00:34:49.665 ************************************ 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:49.665 bdev_null0 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:49.665 [2024-10-08 18:42:41.554834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:49.665 bdev_null1 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:49.665 { 00:34:49.665 "params": { 00:34:49.665 "name": "Nvme$subsystem", 00:34:49.665 "trtype": "$TEST_TRANSPORT", 00:34:49.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:49.665 "adrfam": "ipv4", 00:34:49.665 "trsvcid": "$NVMF_PORT", 00:34:49.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:49.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:49.665 "hdgst": ${hdgst:-false}, 00:34:49.665 "ddgst": ${ddgst:-false} 00:34:49.665 }, 00:34:49.665 "method": "bdev_nvme_attach_controller" 00:34:49.665 } 00:34:49.665 EOF 00:34:49.665 )") 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:49.665 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:49.665 { 00:34:49.665 "params": { 00:34:49.666 "name": "Nvme$subsystem", 00:34:49.666 "trtype": "$TEST_TRANSPORT", 00:34:49.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:49.666 "adrfam": "ipv4", 00:34:49.666 "trsvcid": "$NVMF_PORT", 00:34:49.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:49.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:49.666 "hdgst": ${hdgst:-false}, 00:34:49.666 "ddgst": ${ddgst:-false} 00:34:49.666 }, 00:34:49.666 "method": "bdev_nvme_attach_controller" 00:34:49.666 } 00:34:49.666 EOF 00:34:49.666 )") 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:49.666 "params": { 00:34:49.666 "name": "Nvme0", 00:34:49.666 "trtype": "tcp", 00:34:49.666 "traddr": "10.0.0.2", 00:34:49.666 "adrfam": "ipv4", 00:34:49.666 "trsvcid": "4420", 00:34:49.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:49.666 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:49.666 "hdgst": false, 00:34:49.666 "ddgst": false 00:34:49.666 }, 00:34:49.666 "method": "bdev_nvme_attach_controller" 00:34:49.666 },{ 00:34:49.666 "params": { 00:34:49.666 "name": "Nvme1", 00:34:49.666 "trtype": "tcp", 00:34:49.666 "traddr": "10.0.0.2", 00:34:49.666 "adrfam": "ipv4", 00:34:49.666 "trsvcid": "4420", 00:34:49.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:49.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:49.666 "hdgst": false, 00:34:49.666 "ddgst": false 00:34:49.666 }, 00:34:49.666 "method": "bdev_nvme_attach_controller" 00:34:49.666 }' 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:49.666 18:42:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:49.666 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:49.666 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:49.666 fio-3.35 00:34:49.666 Starting 2 threads 00:34:59.639 00:34:59.639 filename0: (groupid=0, jobs=1): err= 0: pid=682808: Tue Oct 8 18:42:52 2024 00:34:59.639 read: IOPS=189, BW=757KiB/s (775kB/s)(7584KiB/10015msec) 00:34:59.639 slat (nsec): min=5851, max=58739, avg=8918.84, stdev=6209.47 00:34:59.639 clat (usec): min=429, max=42569, avg=21100.74, stdev=20411.06 00:34:59.639 lat (usec): min=435, max=42614, avg=21109.66, stdev=20408.99 00:34:59.639 clat percentiles (usec): 00:34:59.640 | 1.00th=[ 449], 5.00th=[ 502], 10.00th=[ 545], 20.00th=[ 627], 00:34:59.640 | 30.00th=[ 635], 40.00th=[ 644], 50.00th=[41157], 60.00th=[41157], 00:34:59.640 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:34:59.640 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:59.640 | 99.99th=[42730] 00:34:59.640 bw ( KiB/s): min= 672, max= 768, per=65.54%, avg=756.80, stdev=28.00, samples=20 00:34:59.640 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:34:59.640 lat (usec) : 500=4.38%, 750=44.67%, 1000=0.74% 00:34:59.640 lat (msec) : 50=50.21% 00:34:59.640 cpu : usr=98.84%, sys=0.86%, ctx=27, majf=0, minf=115 00:34:59.640 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.640 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.640 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:59.640 filename1: (groupid=0, jobs=1): err= 0: pid=682809: Tue Oct 8 18:42:52 2024 00:34:59.640 read: IOPS=99, BW=397KiB/s (406kB/s)(3968KiB/10001msec) 00:34:59.640 slat (nsec): min=5976, max=64790, avg=12123.25, stdev=9731.02 00:34:59.640 clat (usec): min=374, max=42564, avg=40285.28, stdev=6299.62 00:34:59.640 lat (usec): min=381, max=42606, avg=40297.40, stdev=6299.63 00:34:59.640 clat percentiles (usec): 00:34:59.640 | 1.00th=[ 404], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:34:59.640 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:59.640 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:59.640 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:34:59.640 | 99.99th=[42730] 00:34:59.640 bw ( KiB/s): min= 352, max= 512, per=34.24%, avg=395.79, stdev=34.08, samples=19 00:34:59.640 iops : min= 88, max= 128, avg=98.95, stdev= 8.52, samples=19 00:34:59.640 lat (usec) : 500=2.42% 00:34:59.640 lat (msec) : 50=97.58% 00:34:59.640 cpu : usr=97.58%, sys=2.08%, ctx=33, majf=0, minf=191 00:34:59.640 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.640 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.640 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:59.640 00:34:59.640 Run status group 0 (all jobs): 00:34:59.640 READ: bw=1153KiB/s (1181kB/s), 397KiB/s-757KiB/s (406kB/s-775kB/s), io=11.3MiB (11.8MB), run=10001-10015msec 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.640 00:34:59.640 real 0m11.386s 00:34:59.640 user 0m27.351s 00:34:59.640 sys 0m0.597s 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:59.640 18:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:59.640 ************************************ 00:34:59.640 END TEST fio_dif_1_multi_subsystems 00:34:59.640 ************************************ 00:34:59.640 18:42:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:59.640 18:42:52 
nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:59.640 18:42:52 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:59.640 18:42:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:59.899 ************************************ 00:34:59.899 START TEST fio_dif_rand_params 00:34:59.899 ************************************ 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.899 bdev_null0 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.899 18:42:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.899 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.899 18:42:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:59.899 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.899 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.899 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.899 18:42:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:59.899 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.899 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.899 [2024-10-08 18:42:53.016235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.899 18:42:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.899 18:42:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:59.899 18:42:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:59.899 18:42:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:59.899 18:42:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:59.900 { 00:34:59.900 "params": { 00:34:59.900 "name": "Nvme$subsystem", 00:34:59.900 "trtype": "$TEST_TRANSPORT", 00:34:59.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.900 "adrfam": "ipv4", 00:34:59.900 "trsvcid": "$NVMF_PORT", 00:34:59.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.900 "hdgst": ${hdgst:-false}, 00:34:59.900 "ddgst": ${ddgst:-false} 00:34:59.900 }, 00:34:59.900 "method": "bdev_nvme_attach_controller" 00:34:59.900 } 00:34:59.900 EOF 00:34:59.900 )") 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
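The xtrace above shows how target/dif.sh builds the JSON handed to fio: gen_nvmf_target_json expands one bdev_nvme_attach_controller fragment per subsystem into the config array (the heredoc/cat steps above), then comma-joins the fragments and pretty-prints them with jq (the IFS=,/printf/jq steps around this point, with the resolved result printed just below). The following is a minimal self-contained sketch of that pattern; the traddr/trsvcid/NQN values are the ones this run resolves to, while the "subsystems"/"bdev" wrapper object and the variable names are illustrative assumptions, not the SPDK source.

#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern visible in the trace above.
config=()
for subsystem in 0; do   # add more indices to emit more fragments
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# Comma-join the fragments (what the IFS=, / printf steps in the trace do)
# and pretty-print; the wrapper object is assumed here so jq sees one valid
# JSON document.
fragments=$(IFS=,; printf '%s' "${config[*]}")
jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ $fragments ] } ] }
EOF

Building the config this way keeps a single code path for the one-subsystem case here and the two-subsystem case in the fio_dif_1_multi_subsystems run above, where the same heredoc is simply expanded once per cnode.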
00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:59.900 "params": { 00:34:59.900 "name": "Nvme0", 00:34:59.900 "trtype": "tcp", 00:34:59.900 "traddr": "10.0.0.2", 00:34:59.900 "adrfam": "ipv4", 00:34:59.900 "trsvcid": "4420", 00:34:59.900 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:59.900 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:59.900 "hdgst": false, 00:34:59.900 "ddgst": false 00:34:59.900 }, 00:34:59.900 "method": "bdev_nvme_attach_controller" 00:34:59.900 }' 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:59.900 18:42:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:00.158 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:00.158 ... 
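Taken together, the trace up to this point reduces to a target bring-up plus one fio invocation. In the sketch below, the rpc.py arguments are copied from the rpc_cmd calls in the trace (a DIF-type-3 null bdev exported over NVMe-oF/TCP), and the fio launch mirrors the LD_PRELOAD line just above, except that the JSON config is read from a file and the job file from stdin instead of the /dev/fd/62 and /dev/fd/61 pipes the harness uses. The job-file body is inferred from the parameters this case sets (bs=128k, numjobs=3, iodepth=3, runtime=5) and from fio's own summary line, so treat it as an approximation, not the generated file.

#!/usr/bin/env bash
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Target side: DIF-type-3 null bdev behind an NVMe-oF/TCP subsystem,
# mirroring the rpc_cmd calls in the trace above.
$SPDK/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
  --serial-number 53313233-0 --allow-any-host
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
  -t tcp -a 10.0.0.2 -s 4420

# Initiator side: fio with the SPDK bdev plugin preloaded. nvme0.json is
# assumed to hold the resolved attach config printed a few lines above;
# "-" makes fio read the job file from stdin.
LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf ./nvme0.json - <<'JOB'
[global]
thread=1
rw=randread
bs=128k
numjobs=3
iodepth=3
runtime=5

[filename0]
; bdev name exposed after attach (assumed)
filename=Nvme0n1
JOB

The leading space inside LD_PRELOAD=' /var/jenkins/...' in the trace comes from the ldd | grep libasan / libclang_rt.asan probes a few entries earlier: this is a non-sanitizer build, so asan_lib stays empty and only the fio plugin ends up preloaded.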
00:35:00.158 fio-3.35 00:35:00.158 Starting 3 threads 00:35:06.725 00:35:06.725 filename0: (groupid=0, jobs=1): err= 0: pid=684769: Tue Oct 8 18:42:58 2024 00:35:06.725 read: IOPS=335, BW=42.0MiB/s (44.0MB/s)(212MiB/5043msec) 00:35:06.725 slat (nsec): min=6039, max=26739, avg=11111.31, stdev=2204.55 00:35:06.725 clat (usec): min=3165, max=49473, avg=8893.90, stdev=5496.54 00:35:06.725 lat (usec): min=3171, max=49482, avg=8905.01, stdev=5496.50 00:35:06.725 clat percentiles (usec): 00:35:06.725 | 1.00th=[ 3523], 5.00th=[ 5538], 10.00th=[ 6652], 20.00th=[ 7439], 00:35:06.725 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8356], 60.00th=[ 8586], 00:35:06.725 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[ 9765], 95.00th=[10421], 00:35:06.725 | 99.00th=[47449], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546], 00:35:06.725 | 99.99th=[49546] 00:35:06.725 bw ( KiB/s): min=35584, max=47360, per=35.49%, avg=43289.60, stdev=3815.17, samples=10 00:35:06.725 iops : min= 278, max= 370, avg=338.20, stdev=29.81, samples=10 00:35:06.725 lat (msec) : 4=3.42%, 10=89.43%, 20=5.25%, 50=1.89% 00:35:06.725 cpu : usr=94.31%, sys=5.37%, ctx=7, majf=0, minf=93 00:35:06.725 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.725 issued rwts: total=1694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.725 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:06.725 filename0: (groupid=0, jobs=1): err= 0: pid=684770: Tue Oct 8 18:42:58 2024 00:35:06.725 read: IOPS=312, BW=39.1MiB/s (41.0MB/s)(197MiB/5045msec) 00:35:06.725 slat (nsec): min=6091, max=27272, avg=11307.73, stdev=2041.65 00:35:06.725 clat (usec): min=3492, max=52820, avg=9556.56, stdev=4861.25 00:35:06.725 lat (usec): min=3503, max=52833, avg=9567.87, stdev=4861.09 00:35:06.725 clat percentiles (usec): 00:35:06.725 | 1.00th=[ 4948], 5.00th=[ 6259], 10.00th=[ 7046], 20.00th=[ 8029], 00:35:06.725 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:35:06.725 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[10945], 95.00th=[11469], 00:35:06.725 | 99.00th=[45351], 99.50th=[49546], 99.90th=[52167], 99.95th=[52691], 00:35:06.725 | 99.99th=[52691] 00:35:06.725 bw ( KiB/s): min=33792, max=44032, per=33.04%, avg=40294.40, stdev=2784.01, samples=10 00:35:06.725 iops : min= 264, max= 344, avg=314.80, stdev=21.75, samples=10 00:35:06.725 lat (msec) : 4=0.76%, 10=73.87%, 20=23.91%, 50=1.14%, 100=0.32% 00:35:06.725 cpu : usr=94.73%, sys=4.98%, ctx=11, majf=0, minf=40 00:35:06.725 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.725 issued rwts: total=1577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.725 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:06.725 filename0: (groupid=0, jobs=1): err= 0: pid=684771: Tue Oct 8 18:42:58 2024 00:35:06.725 read: IOPS=304, BW=38.1MiB/s (39.9MB/s)(192MiB/5044msec) 00:35:06.725 slat (nsec): min=6112, max=26633, avg=11375.13, stdev=2036.16 00:35:06.725 clat (usec): min=4610, max=50117, avg=9809.72, stdev=4810.58 00:35:06.725 lat (usec): min=4616, max=50124, avg=9821.09, stdev=4810.60 00:35:06.725 clat percentiles (usec): 00:35:06.725 | 1.00th=[ 5473], 5.00th=[ 6194], 10.00th=[ 6915], 20.00th=[ 
8291], 00:35:06.725 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9765], 00:35:06.725 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10945], 95.00th=[11469], 00:35:06.725 | 99.00th=[45876], 99.50th=[47449], 99.90th=[50070], 99.95th=[50070], 00:35:06.725 | 99.99th=[50070] 00:35:06.725 bw ( KiB/s): min=28160, max=43776, per=32.20%, avg=39270.40, stdev=4292.53, samples=10 00:35:06.725 iops : min= 220, max= 342, avg=306.80, stdev=33.54, samples=10 00:35:06.725 lat (msec) : 10=66.67%, 20=31.84%, 50=1.43%, 100=0.07% 00:35:06.725 cpu : usr=94.49%, sys=5.20%, ctx=7, majf=0, minf=15 00:35:06.725 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.725 issued rwts: total=1536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.725 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:06.725 00:35:06.725 Run status group 0 (all jobs): 00:35:06.725 READ: bw=119MiB/s (125MB/s), 38.1MiB/s-42.0MiB/s (39.9MB/s-44.0MB/s), io=601MiB (630MB), run=5043-5045msec 00:35:06.725 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:06.725 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:06.725 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:06.725 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.726 bdev_null0 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.726 [2024-10-08 18:42:59.148286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.726 bdev_null1 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.726 bdev_null2 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:06.726 { 00:35:06.726 "params": { 00:35:06.726 "name": "Nvme$subsystem", 00:35:06.726 "trtype": "$TEST_TRANSPORT", 00:35:06.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:06.726 "adrfam": "ipv4", 00:35:06.726 "trsvcid": "$NVMF_PORT", 00:35:06.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:06.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:06.726 "hdgst": ${hdgst:-false}, 00:35:06.726 "ddgst": ${ddgst:-false} 00:35:06.726 }, 00:35:06.726 "method": "bdev_nvme_attach_controller" 00:35:06.726 } 00:35:06.726 EOF 00:35:06.726 )") 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:06.726 { 00:35:06.726 "params": { 00:35:06.726 "name": "Nvme$subsystem", 00:35:06.726 "trtype": "$TEST_TRANSPORT", 00:35:06.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:06.726 "adrfam": "ipv4", 00:35:06.726 "trsvcid": "$NVMF_PORT", 00:35:06.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:06.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:06.726 "hdgst": ${hdgst:-false}, 00:35:06.726 "ddgst": ${ddgst:-false} 00:35:06.726 }, 00:35:06.726 "method": "bdev_nvme_attach_controller" 00:35:06.726 } 00:35:06.726 EOF 00:35:06.726 )") 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:06.726 18:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:06.727 { 00:35:06.727 "params": { 00:35:06.727 "name": "Nvme$subsystem", 00:35:06.727 "trtype": "$TEST_TRANSPORT", 00:35:06.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:06.727 "adrfam": "ipv4", 00:35:06.727 "trsvcid": "$NVMF_PORT", 00:35:06.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:06.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:06.727 "hdgst": ${hdgst:-false}, 00:35:06.727 "ddgst": ${ddgst:-false} 00:35:06.727 }, 00:35:06.727 "method": "bdev_nvme_attach_controller" 00:35:06.727 } 00:35:06.727 EOF 00:35:06.727 )") 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:06.727 "params": { 00:35:06.727 "name": "Nvme0", 00:35:06.727 "trtype": "tcp", 00:35:06.727 "traddr": "10.0.0.2", 00:35:06.727 "adrfam": "ipv4", 00:35:06.727 "trsvcid": "4420", 00:35:06.727 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:06.727 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:06.727 "hdgst": false, 00:35:06.727 "ddgst": false 00:35:06.727 }, 00:35:06.727 "method": "bdev_nvme_attach_controller" 00:35:06.727 },{ 00:35:06.727 "params": { 00:35:06.727 "name": "Nvme1", 00:35:06.727 "trtype": "tcp", 00:35:06.727 "traddr": "10.0.0.2", 00:35:06.727 "adrfam": "ipv4", 00:35:06.727 "trsvcid": "4420", 00:35:06.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:06.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:06.727 "hdgst": false, 00:35:06.727 "ddgst": false 00:35:06.727 }, 00:35:06.727 "method": "bdev_nvme_attach_controller" 00:35:06.727 },{ 00:35:06.727 "params": { 00:35:06.727 "name": "Nvme2", 00:35:06.727 "trtype": "tcp", 00:35:06.727 "traddr": "10.0.0.2", 00:35:06.727 "adrfam": "ipv4", 00:35:06.727 "trsvcid": "4420", 00:35:06.727 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:06.727 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:06.727 "hdgst": false, 00:35:06.727 "ddgst": false 00:35:06.727 }, 00:35:06.727 "method": "bdev_nvme_attach_controller" 00:35:06.727 }' 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:06.727 18:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:06.727 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:06.727 ... 00:35:06.727 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:06.727 ... 00:35:06.727 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:06.727 ... 00:35:06.727 fio-3.35 00:35:06.727 Starting 24 threads 00:35:18.993 00:35:18.993 filename0: (groupid=0, jobs=1): err= 0: pid=685936: Tue Oct 8 18:43:10 2024 00:35:18.993 read: IOPS=531, BW=2127KiB/s (2178kB/s)(20.8MiB/10020msec) 00:35:18.993 slat (nsec): min=7565, max=97965, avg=31740.75, stdev=21870.44 00:35:18.993 clat (usec): min=6203, max=45246, avg=29858.02, stdev=1821.34 00:35:18.993 lat (usec): min=6211, max=45289, avg=29889.77, stdev=1821.26 00:35:18.993 clat percentiles (usec): 00:35:18.993 | 1.00th=[28443], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:35:18.993 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:35:18.993 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:35:18.993 | 99.00th=[31327], 99.50th=[34341], 99.90th=[44827], 99.95th=[45351], 00:35:18.993 | 99.99th=[45351] 00:35:18.993 bw ( KiB/s): min= 2048, max= 2180, per=4.18%, avg=2129.05, stdev=63.61, samples=19 00:35:18.993 iops : min= 512, max= 545, avg=532.26, stdev=15.90, samples=19 00:35:18.993 lat (msec) : 10=0.30%, 20=0.30%, 50=99.40% 00:35:18.993 cpu : usr=98.55%, sys=1.06%, ctx=16, majf=0, minf=28 00:35:18.993 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.993 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.993 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.993 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.993 filename0: (groupid=0, jobs=1): err= 0: pid=685937: Tue Oct 8 18:43:10 2024 00:35:18.993 read: IOPS=530, BW=2124KiB/s (2175kB/s)(20.8MiB/10005msec) 00:35:18.993 slat (usec): min=5, max=114, avg=42.42, stdev=17.39 00:35:18.993 clat (usec): min=8856, max=52849, avg=29717.61, stdev=1879.52 00:35:18.993 lat (usec): min=8872, max=52867, avg=29760.03, stdev=1880.73 00:35:18.993 clat percentiles (usec): 00:35:18.993 | 1.00th=[28967], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:35:18.993 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:35:18.993 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:35:18.993 | 99.00th=[30802], 99.50th=[33817], 99.90th=[52691], 99.95th=[52691], 00:35:18.993 | 99.99th=[52691] 00:35:18.993 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2115.53, stdev=77.89, samples=19 00:35:18.993 iops : min= 480, max= 544, avg=528.84, stdev=19.58, samples=19 00:35:18.993 lat (msec) : 10=0.30%, 20=0.30%, 50=99.10%, 100=0.30% 00:35:18.993 cpu : usr=98.70%, sys=0.89%, ctx=14, majf=0, 
minf=36 00:35:18.993 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.994 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.994 filename0: (groupid=0, jobs=1): err= 0: pid=685938: Tue Oct 8 18:43:10 2024 00:35:18.994 read: IOPS=538, BW=2156KiB/s (2207kB/s)(21.1MiB/10005msec) 00:35:18.994 slat (nsec): min=5985, max=93482, avg=13399.61, stdev=8798.60 00:35:18.994 clat (usec): min=6367, max=65026, avg=29569.01, stdev=3826.99 00:35:18.994 lat (usec): min=6375, max=65044, avg=29582.41, stdev=3826.78 00:35:18.994 clat percentiles (usec): 00:35:18.994 | 1.00th=[16581], 5.00th=[23200], 10.00th=[29754], 20.00th=[29754], 00:35:18.994 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:35:18.994 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:35:18.994 | 99.00th=[37487], 99.50th=[44303], 99.90th=[64750], 99.95th=[64750], 00:35:18.994 | 99.99th=[65274] 00:35:18.994 bw ( KiB/s): min= 1923, max= 2336, per=4.19%, avg=2132.37, stdev=97.38, samples=19 00:35:18.994 iops : min= 480, max= 584, avg=533.05, stdev=24.43, samples=19 00:35:18.994 lat (msec) : 10=0.78%, 20=1.82%, 50=97.11%, 100=0.30% 00:35:18.994 cpu : usr=98.68%, sys=0.94%, ctx=5, majf=0, minf=35 00:35:18.994 IO depths : 1=5.1%, 2=10.3%, 4=21.4%, 8=55.3%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:18.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 complete : 0=0.0%, 4=93.2%, 8=1.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.994 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.994 filename0: (groupid=0, jobs=1): err= 0: pid=685939: Tue Oct 8 18:43:10 2024 00:35:18.994 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10008msec) 00:35:18.994 slat (nsec): min=7660, max=96397, avg=39934.12, stdev=21481.96 00:35:18.994 clat (usec): min=17667, max=62047, avg=29827.06, stdev=1928.54 00:35:18.994 lat (usec): min=17690, max=62064, avg=29866.99, stdev=1928.78 00:35:18.994 clat percentiles (usec): 00:35:18.994 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:35:18.994 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:35:18.994 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:35:18.994 | 99.00th=[31065], 99.50th=[34341], 99.90th=[62129], 99.95th=[62129], 00:35:18.994 | 99.99th=[62129] 00:35:18.994 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2115.37, stdev=78.31, samples=19 00:35:18.994 iops : min= 480, max= 544, avg=528.84, stdev=19.58, samples=19 00:35:18.994 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:35:18.994 cpu : usr=98.59%, sys=1.02%, ctx=15, majf=0, minf=50 00:35:18.994 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.994 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.994 filename0: (groupid=0, jobs=1): err= 0: pid=685941: Tue Oct 8 18:43:10 2024 00:35:18.994 read: IOPS=530, BW=2124KiB/s (2174kB/s)(20.8MiB/10006msec) 
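A quick cross-check of the per-thread numbers in this group: the jobs use a fixed 4096-byte block size, so the reported bandwidth and IOPS are two views of the same figure, and both can be re-derived from the issued-I/O totals. Taking the pid=685937 job above as a worked example (the awk one-liner is only for illustration):

awk 'BEGIN {
  printf "IOPS from bandwidth: %.0f\n", 2124 / 4       # BW=2124KiB/s over bs=4KiB
  printf "IOPS from totals:    %.1f\n", 5312 / 10.005  # issued rwts total / runtime(s)
}'
# Both come out at ~531, matching the reported "read: IOPS=530" once fio's
# rounding of the averaged window samples is taken into account.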
00:35:18.994 slat (usec): min=6, max=103, avg=42.63, stdev=17.17 00:35:18.994 clat (usec): min=8859, max=53588, avg=29719.35, stdev=1907.11 00:35:18.994 lat (usec): min=8875, max=53606, avg=29761.98, stdev=1908.14 00:35:18.994 clat percentiles (usec): 00:35:18.994 | 1.00th=[28967], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:35:18.994 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:35:18.994 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:35:18.994 | 99.00th=[30802], 99.50th=[33817], 99.90th=[53740], 99.95th=[53740], 00:35:18.994 | 99.99th=[53740] 00:35:18.994 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2115.37, stdev=78.31, samples=19 00:35:18.994 iops : min= 480, max= 544, avg=528.84, stdev=19.58, samples=19 00:35:18.994 lat (msec) : 10=0.30%, 20=0.30%, 50=99.10%, 100=0.30% 00:35:18.994 cpu : usr=98.55%, sys=1.06%, ctx=8, majf=0, minf=32 00:35:18.994 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.994 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.994 filename0: (groupid=0, jobs=1): err= 0: pid=685942: Tue Oct 8 18:43:10 2024 00:35:18.994 read: IOPS=527, BW=2111KiB/s (2162kB/s)(20.6MiB/10004msec) 00:35:18.994 slat (nsec): min=7651, max=58868, avg=26986.33, stdev=9446.22 00:35:18.994 clat (usec): min=27048, max=80993, avg=30072.72, stdev=2677.48 00:35:18.994 lat (usec): min=27062, max=81014, avg=30099.71, stdev=2676.78 00:35:18.994 clat percentiles (usec): 00:35:18.994 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:35:18.994 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:35:18.994 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:35:18.994 | 99.00th=[31327], 99.50th=[34866], 99.90th=[78119], 99.95th=[78119], 00:35:18.994 | 99.99th=[81265] 00:35:18.994 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2108.37, stdev=78.08, samples=19 00:35:18.994 iops : min= 480, max= 544, avg=527.05, stdev=19.49, samples=19 00:35:18.994 lat (msec) : 50=99.70%, 100=0.30% 00:35:18.994 cpu : usr=98.69%, sys=0.95%, ctx=21, majf=0, minf=37 00:35:18.994 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.994 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.994 filename0: (groupid=0, jobs=1): err= 0: pid=685943: Tue Oct 8 18:43:10 2024 00:35:18.994 read: IOPS=529, BW=2116KiB/s (2167kB/s)(20.7MiB/10009msec) 00:35:18.994 slat (usec): min=8, max=107, avg=46.30, stdev=16.83 00:35:18.994 clat (usec): min=27398, max=53673, avg=29821.07, stdev=1366.09 00:35:18.994 lat (usec): min=27443, max=53689, avg=29867.37, stdev=1364.94 00:35:18.994 clat percentiles (usec): 00:35:18.994 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:35:18.994 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:35:18.994 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:35:18.994 | 99.00th=[30802], 99.50th=[33424], 99.90th=[53740], 99.95th=[53740], 00:35:18.994 
| 99.99th=[53740] 00:35:18.994 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2115.37, stdev=78.31, samples=19 00:35:18.994 iops : min= 480, max= 544, avg=528.84, stdev=19.58, samples=19 00:35:18.994 lat (msec) : 50=99.70%, 100=0.30% 00:35:18.994 cpu : usr=98.69%, sys=0.92%, ctx=13, majf=0, minf=29 00:35:18.994 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.994 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.994 filename0: (groupid=0, jobs=1): err= 0: pid=685944: Tue Oct 8 18:43:10 2024 00:35:18.994 read: IOPS=529, BW=2118KiB/s (2169kB/s)(20.7MiB/10002msec) 00:35:18.994 slat (usec): min=7, max=102, avg=44.07, stdev=17.03 00:35:18.994 clat (usec): min=16812, max=59553, avg=29792.61, stdev=1818.33 00:35:18.994 lat (usec): min=16841, max=59565, avg=29836.68, stdev=1818.03 00:35:18.994 clat percentiles (usec): 00:35:18.994 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:35:18.994 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:35:18.994 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:35:18.994 | 99.00th=[30802], 99.50th=[33817], 99.90th=[59507], 99.95th=[59507], 00:35:18.994 | 99.99th=[59507] 00:35:18.994 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2115.37, stdev=78.31, samples=19 00:35:18.994 iops : min= 480, max= 544, avg=528.84, stdev=19.58, samples=19 00:35:18.994 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:35:18.994 cpu : usr=98.69%, sys=0.93%, ctx=14, majf=0, minf=45 00:35:18.994 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.994 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.994 filename1: (groupid=0, jobs=1): err= 0: pid=685945: Tue Oct 8 18:43:10 2024 00:35:18.994 read: IOPS=528, BW=2115KiB/s (2166kB/s)(20.7MiB/10014msec) 00:35:18.994 slat (nsec): min=7352, max=31936, avg=11833.36, stdev=3563.84 00:35:18.994 clat (usec): min=15698, max=63469, avg=30153.38, stdev=2098.40 00:35:18.994 lat (usec): min=15715, max=63490, avg=30165.21, stdev=2098.29 00:35:18.994 clat percentiles (usec): 00:35:18.994 | 1.00th=[29754], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:35:18.994 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:35:18.994 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:35:18.994 | 99.00th=[31589], 99.50th=[33817], 99.90th=[63177], 99.95th=[63701], 00:35:18.994 | 99.99th=[63701] 00:35:18.994 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2115.26, stdev=77.69, samples=19 00:35:18.994 iops : min= 480, max= 544, avg=528.74, stdev=19.50, samples=19 00:35:18.994 lat (msec) : 20=0.19%, 50=99.51%, 100=0.30% 00:35:18.994 cpu : usr=98.53%, sys=1.09%, ctx=10, majf=0, minf=58 00:35:18.994 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.994 issued rwts: 
total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.994 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.994 filename1: (groupid=0, jobs=1): err= 0: pid=685946: Tue Oct 8 18:43:10 2024 00:35:18.994 read: IOPS=529, BW=2118KiB/s (2169kB/s)(20.7MiB/10011msec) 00:35:18.994 slat (usec): min=5, max=100, avg=38.58, stdev=21.83 00:35:18.994 clat (usec): min=10765, max=55902, avg=29783.12, stdev=1751.75 00:35:18.994 lat (usec): min=10776, max=55916, avg=29821.70, stdev=1752.73 00:35:18.994 clat percentiles (usec): 00:35:18.994 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:35:18.994 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:35:18.994 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:35:18.994 | 99.00th=[31065], 99.50th=[34341], 99.90th=[55837], 99.95th=[55837], 00:35:18.995 | 99.99th=[55837] 00:35:18.995 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2115.37, stdev=78.31, samples=19 00:35:18.995 iops : min= 480, max= 544, avg=528.84, stdev=19.58, samples=19 00:35:18.995 lat (msec) : 20=0.41%, 50=99.28%, 100=0.30% 00:35:18.995 cpu : usr=98.64%, sys=0.98%, ctx=18, majf=0, minf=34 00:35:18.995 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.995 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.995 issued rwts: total=5302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.995 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.995 filename1: (groupid=0, jobs=1): err= 0: pid=685947: Tue Oct 8 18:43:10 2024 00:35:18.995 read: IOPS=530, BW=2121KiB/s (2172kB/s)(20.8MiB/10017msec) 00:35:18.995 slat (nsec): min=7310, max=98858, avg=38798.41, stdev=22331.81 00:35:18.995 clat (usec): min=13637, max=54348, avg=29859.39, stdev=1455.97 00:35:18.995 lat (usec): min=13662, max=54375, avg=29898.19, stdev=1454.33 00:35:18.995 clat percentiles (usec): 00:35:18.995 | 1.00th=[28967], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:35:18.995 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:35:18.995 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:35:18.995 | 99.00th=[31065], 99.50th=[34866], 99.90th=[48497], 99.95th=[48497], 00:35:18.995 | 99.99th=[54264] 00:35:18.995 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2122.11, stdev=64.93, samples=19 00:35:18.995 iops : min= 512, max= 544, avg=530.53, stdev=16.23, samples=19 00:35:18.995 lat (msec) : 20=0.30%, 50=99.66%, 100=0.04% 00:35:18.995 cpu : usr=98.55%, sys=1.07%, ctx=7, majf=0, minf=32 00:35:18.995 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.995 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.995 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.995 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.995 filename1: (groupid=0, jobs=1): err= 0: pid=685948: Tue Oct 8 18:43:10 2024 00:35:18.995 read: IOPS=529, BW=2116KiB/s (2167kB/s)(20.7MiB/10009msec) 00:35:18.995 slat (usec): min=8, max=102, avg=45.91, stdev=16.60 00:35:18.995 clat (usec): min=18130, max=65510, avg=29815.43, stdev=1477.36 00:35:18.995 lat (usec): min=18147, max=65527, avg=29861.34, stdev=1476.67 00:35:18.995 clat percentiles (usec): 00:35:18.995 | 1.00th=[29230], 5.00th=[29230], 
10.00th=[29492], 20.00th=[29492], 00:35:18.995 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:35:18.995 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:35:18.995 | 99.00th=[30802], 99.50th=[33817], 99.90th=[53740], 99.95th=[53740], 00:35:18.995 | 99.99th=[65274] 00:35:18.995 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2115.37, stdev=78.31, samples=19 00:35:18.995 iops : min= 480, max= 544, avg=528.84, stdev=19.58, samples=19 00:35:18.995 lat (msec) : 20=0.04%, 50=99.66%, 100=0.30% 00:35:18.995 cpu : usr=98.68%, sys=0.92%, ctx=12, majf=0, minf=29 00:35:18.995 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.995 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.995 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.995 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.995 filename1: (groupid=0, jobs=1): err= 0: pid=685949: Tue Oct 8 18:43:10 2024 00:35:18.995 read: IOPS=529, BW=2116KiB/s (2167kB/s)(20.7MiB/10009msec) 00:35:18.995 slat (usec): min=10, max=103, avg=45.18, stdev=16.46 00:35:18.995 clat (usec): min=27351, max=53721, avg=29825.10, stdev=1365.82 00:35:18.995 lat (usec): min=27381, max=53743, avg=29870.28, stdev=1365.01 00:35:18.995 clat percentiles (usec): 00:35:18.995 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:35:18.995 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:35:18.995 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:35:18.995 | 99.00th=[30802], 99.50th=[33817], 99.90th=[53740], 99.95th=[53740], 00:35:18.995 | 99.99th=[53740] 00:35:18.995 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2115.37, stdev=78.31, samples=19 00:35:18.995 iops : min= 480, max= 544, avg=528.84, stdev=19.58, samples=19 00:35:18.995 lat (msec) : 50=99.70%, 100=0.30% 00:35:18.995 cpu : usr=98.64%, sys=0.97%, ctx=13, majf=0, minf=27 00:35:18.995 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.995 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.995 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.995 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.995 filename1: (groupid=0, jobs=1): err= 0: pid=685950: Tue Oct 8 18:43:10 2024 00:35:18.995 read: IOPS=530, BW=2124KiB/s (2175kB/s)(20.8MiB/10005msec) 00:35:18.995 slat (usec): min=5, max=103, avg=43.44, stdev=16.65 00:35:18.995 clat (usec): min=8813, max=53238, avg=29721.59, stdev=1895.82 00:35:18.995 lat (usec): min=8828, max=53253, avg=29765.03, stdev=1896.68 00:35:18.995 clat percentiles (usec): 00:35:18.995 | 1.00th=[28967], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:35:18.995 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:35:18.995 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:35:18.995 | 99.00th=[30802], 99.50th=[33424], 99.90th=[53216], 99.95th=[53216], 00:35:18.995 | 99.99th=[53216] 00:35:18.995 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2115.53, stdev=77.89, samples=19 00:35:18.995 iops : min= 480, max= 544, avg=528.84, stdev=19.58, samples=19 00:35:18.995 lat (msec) : 10=0.30%, 20=0.30%, 50=99.10%, 100=0.30% 00:35:18.995 cpu : usr=98.51%, sys=1.11%, ctx=16, majf=0, 
minf=34 00:35:18.995 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.995 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.995 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.995 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.995 filename1: (groupid=0, jobs=1): err= 0: pid=685952: Tue Oct 8 18:43:10 2024 00:35:18.995 read: IOPS=541, BW=2167KiB/s (2219kB/s)(21.2MiB/10007msec) 00:35:18.995 slat (nsec): min=7388, max=94232, avg=17694.09, stdev=12223.07 00:35:18.995 clat (usec): min=6241, max=45191, avg=29385.41, stdev=3102.58 00:35:18.995 lat (usec): min=6253, max=45285, avg=29403.11, stdev=3104.02 00:35:18.995 clat percentiles (usec): 00:35:18.995 | 1.00th=[13829], 5.00th=[23725], 10.00th=[29492], 20.00th=[29754], 00:35:18.995 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:35:18.995 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:35:18.995 | 99.00th=[32375], 99.50th=[37487], 99.90th=[44827], 99.95th=[44827], 00:35:18.995 | 99.99th=[45351] 00:35:18.995 bw ( KiB/s): min= 2048, max= 2352, per=4.26%, avg=2166.11, stdev=91.66, samples=19 00:35:18.995 iops : min= 512, max= 588, avg=541.53, stdev=22.92, samples=19 00:35:18.995 lat (msec) : 10=0.81%, 20=1.51%, 50=97.68% 00:35:18.995 cpu : usr=98.47%, sys=1.13%, ctx=37, majf=0, minf=49 00:35:18.995 IO depths : 1=5.7%, 2=11.5%, 4=23.5%, 8=52.4%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:18.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.995 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.995 issued rwts: total=5422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.995 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.995 filename1: (groupid=0, jobs=1): err= 0: pid=685953: Tue Oct 8 18:43:10 2024 00:35:18.995 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10007msec) 00:35:18.995 slat (usec): min=6, max=105, avg=39.80, stdev=21.48 00:35:18.995 clat (usec): min=17518, max=62095, avg=29820.70, stdev=1935.32 00:35:18.995 lat (usec): min=17535, max=62113, avg=29860.49, stdev=1935.59 00:35:18.995 clat percentiles (usec): 00:35:18.995 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:35:18.995 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:35:18.995 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:35:18.995 | 99.00th=[31065], 99.50th=[34341], 99.90th=[62129], 99.95th=[62129], 00:35:18.995 | 99.99th=[62129] 00:35:18.995 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2115.37, stdev=78.31, samples=19 00:35:18.995 iops : min= 480, max= 544, avg=528.84, stdev=19.58, samples=19 00:35:18.995 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:35:18.995 cpu : usr=98.56%, sys=1.05%, ctx=15, majf=0, minf=36 00:35:18.995 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.995 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.995 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.995 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.995 filename2: (groupid=0, jobs=1): err= 0: pid=685954: Tue Oct 8 18:43:10 2024 00:35:18.995 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10005msec) 00:35:18.995 slat 
(nsec): min=6371, max=95155, avg=38595.48, stdev=21707.66 00:35:18.995 clat (usec): min=17657, max=62144, avg=29820.21, stdev=1785.60 00:35:18.995 lat (usec): min=17672, max=62164, avg=29858.81, stdev=1785.98 00:35:18.995 clat percentiles (usec): 00:35:18.995 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:35:18.995 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:35:18.995 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:35:18.995 | 99.00th=[31065], 99.50th=[34341], 99.90th=[58983], 99.95th=[58983], 00:35:18.995 | 99.99th=[62129] 00:35:18.995 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2115.37, stdev=78.31, samples=19 00:35:18.996 iops : min= 480, max= 544, avg=528.84, stdev=19.58, samples=19 00:35:18.996 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:35:18.996 cpu : usr=98.59%, sys=1.02%, ctx=14, majf=0, minf=30 00:35:18.996 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.996 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.996 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.996 filename2: (groupid=0, jobs=1): err= 0: pid=685955: Tue Oct 8 18:43:10 2024 00:35:18.996 read: IOPS=532, BW=2130KiB/s (2181kB/s)(20.8MiB/10021msec) 00:35:18.996 slat (nsec): min=7343, max=96365, avg=16952.11, stdev=12750.04 00:35:18.996 clat (usec): min=1497, max=45286, avg=29917.69, stdev=2062.90 00:35:18.996 lat (usec): min=1513, max=45323, avg=29934.65, stdev=2063.21 00:35:18.996 clat percentiles (usec): 00:35:18.996 | 1.00th=[27657], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:35:18.996 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:35:18.996 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:35:18.996 | 99.00th=[31327], 99.50th=[34341], 99.90th=[44827], 99.95th=[45351], 00:35:18.996 | 99.99th=[45351] 00:35:18.996 bw ( KiB/s): min= 2048, max= 2232, per=4.19%, avg=2131.79, stdev=66.95, samples=19 00:35:18.996 iops : min= 512, max= 558, avg=532.95, stdev=16.74, samples=19 00:35:18.996 lat (msec) : 2=0.13%, 10=0.30%, 20=0.30%, 50=99.27% 00:35:18.996 cpu : usr=98.44%, sys=1.17%, ctx=15, majf=0, minf=50 00:35:18.996 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.996 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.996 issued rwts: total=5335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.996 filename2: (groupid=0, jobs=1): err= 0: pid=685956: Tue Oct 8 18:43:10 2024 00:35:18.996 read: IOPS=529, BW=2116KiB/s (2167kB/s)(20.7MiB/10009msec) 00:35:18.996 slat (usec): min=9, max=107, avg=45.22, stdev=16.74 00:35:18.996 clat (usec): min=18602, max=53680, avg=29819.00, stdev=1395.88 00:35:18.996 lat (usec): min=18611, max=53714, avg=29864.22, stdev=1395.49 00:35:18.996 clat percentiles (usec): 00:35:18.996 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:35:18.996 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:35:18.996 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:35:18.996 | 99.00th=[30802], 99.50th=[33817], 99.90th=[53740], 99.95th=[53740], 
00:35:18.996 | 99.99th=[53740] 00:35:18.996 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2115.37, stdev=78.31, samples=19 00:35:18.996 iops : min= 480, max= 544, avg=528.84, stdev=19.58, samples=19 00:35:18.996 lat (msec) : 20=0.04%, 50=99.66%, 100=0.30% 00:35:18.996 cpu : usr=98.66%, sys=0.95%, ctx=12, majf=0, minf=35 00:35:18.996 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.996 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.996 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.996 filename2: (groupid=0, jobs=1): err= 0: pid=685957: Tue Oct 8 18:43:10 2024 00:35:18.996 read: IOPS=529, BW=2116KiB/s (2167kB/s)(20.7MiB/10009msec) 00:35:18.996 slat (usec): min=7, max=106, avg=44.25, stdev=18.21 00:35:18.996 clat (usec): min=18095, max=65382, avg=29853.61, stdev=1473.44 00:35:18.996 lat (usec): min=18110, max=65410, avg=29897.87, stdev=1471.71 00:35:18.996 clat percentiles (usec): 00:35:18.996 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:35:18.996 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:35:18.996 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:35:18.996 | 99.00th=[31065], 99.50th=[33817], 99.90th=[53216], 99.95th=[53740], 00:35:18.996 | 99.99th=[65274] 00:35:18.996 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2115.37, stdev=78.31, samples=19 00:35:18.996 iops : min= 480, max= 544, avg=528.84, stdev=19.58, samples=19 00:35:18.996 lat (msec) : 20=0.04%, 50=99.66%, 100=0.30% 00:35:18.996 cpu : usr=98.59%, sys=1.02%, ctx=17, majf=0, minf=46 00:35:18.996 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.996 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.996 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.996 filename2: (groupid=0, jobs=1): err= 0: pid=685958: Tue Oct 8 18:43:10 2024 00:35:18.996 read: IOPS=529, BW=2116KiB/s (2167kB/s)(20.7MiB/10011msec) 00:35:18.996 slat (nsec): min=6519, max=96108, avg=40962.32, stdev=21310.31 00:35:18.996 clat (usec): min=17649, max=66320, avg=29833.44, stdev=2151.58 00:35:18.996 lat (usec): min=17664, max=66339, avg=29874.41, stdev=2151.17 00:35:18.996 clat percentiles (usec): 00:35:18.996 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:35:18.996 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:35:18.996 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:35:18.996 | 99.00th=[31065], 99.50th=[34866], 99.90th=[66323], 99.95th=[66323], 00:35:18.996 | 99.99th=[66323] 00:35:18.996 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2108.63, stdev=78.31, samples=19 00:35:18.996 iops : min= 480, max= 544, avg=527.16, stdev=19.58, samples=19 00:35:18.996 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:35:18.996 cpu : usr=98.74%, sys=0.86%, ctx=14, majf=0, minf=32 00:35:18.996 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.996 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:18.996 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.996 filename2: (groupid=0, jobs=1): err= 0: pid=685959: Tue Oct 8 18:43:10 2024 00:35:18.996 read: IOPS=532, BW=2131KiB/s (2182kB/s)(20.8MiB/10002msec) 00:35:18.996 slat (nsec): min=7611, max=59627, avg=22599.34, stdev=9953.40 00:35:18.996 clat (usec): min=5047, max=45315, avg=29848.42, stdev=2199.56 00:35:18.996 lat (usec): min=5091, max=45355, avg=29871.02, stdev=2199.53 00:35:18.996 clat percentiles (usec): 00:35:18.996 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:35:18.996 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:35:18.996 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:35:18.996 | 99.00th=[31065], 99.50th=[33817], 99.90th=[45351], 99.95th=[45351], 00:35:18.996 | 99.99th=[45351] 00:35:18.996 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2128.84, stdev=76.45, samples=19 00:35:18.996 iops : min= 512, max= 576, avg=532.21, stdev=19.11, samples=19 00:35:18.996 lat (msec) : 10=0.60%, 20=0.34%, 50=99.06% 00:35:18.996 cpu : usr=98.47%, sys=1.01%, ctx=117, majf=0, minf=47 00:35:18.996 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:18.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.996 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.996 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.996 filename2: (groupid=0, jobs=1): err= 0: pid=685960: Tue Oct 8 18:43:10 2024 00:35:18.996 read: IOPS=531, BW=2128KiB/s (2179kB/s)(20.8MiB/10004msec) 00:35:18.996 slat (usec): min=5, max=101, avg=41.40, stdev=20.17 00:35:18.996 clat (usec): min=9693, max=52959, avg=29685.77, stdev=2667.88 00:35:18.996 lat (usec): min=9716, max=52974, avg=29727.17, stdev=2669.38 00:35:18.996 clat percentiles (usec): 00:35:18.996 | 1.00th=[18482], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:35:18.996 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:35:18.996 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:35:18.996 | 99.00th=[37487], 99.50th=[47973], 99.90th=[52691], 99.95th=[52691], 00:35:18.996 | 99.99th=[53216] 00:35:18.996 bw ( KiB/s): min= 1923, max= 2176, per=4.16%, avg=2119.74, stdev=73.52, samples=19 00:35:18.996 iops : min= 480, max= 544, avg=529.89, stdev=18.49, samples=19 00:35:18.996 lat (msec) : 10=0.15%, 20=0.94%, 50=98.61%, 100=0.30% 00:35:18.996 cpu : usr=98.50%, sys=1.12%, ctx=14, majf=0, minf=27 00:35:18.996 IO depths : 1=5.4%, 2=11.0%, 4=22.6%, 8=53.4%, 16=7.5%, 32=0.0%, >=64=0.0% 00:35:18.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.996 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.996 issued rwts: total=5322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.996 filename2: (groupid=0, jobs=1): err= 0: pid=685961: Tue Oct 8 18:43:10 2024 00:35:18.996 read: IOPS=530, BW=2124KiB/s (2175kB/s)(20.8MiB/10004msec) 00:35:18.996 slat (usec): min=8, max=115, avg=43.82, stdev=16.79 00:35:18.996 clat (usec): min=9014, max=52700, avg=29719.28, stdev=2030.60 00:35:18.996 lat (usec): min=9022, max=52714, avg=29763.10, stdev=2032.75 00:35:18.996 clat percentiles (usec): 00:35:18.996 | 1.00th=[27657], 
5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:35:18.996 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:35:18.996 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:35:18.996 | 99.00th=[31065], 99.50th=[40633], 99.90th=[52691], 99.95th=[52691], 00:35:18.996 | 99.99th=[52691] 00:35:18.996 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2115.53, stdev=77.89, samples=19 00:35:18.996 iops : min= 480, max= 544, avg=528.84, stdev=19.58, samples=19 00:35:18.996 lat (msec) : 10=0.30%, 20=0.56%, 50=98.83%, 100=0.30% 00:35:18.996 cpu : usr=98.64%, sys=0.98%, ctx=20, majf=0, minf=50 00:35:18.996 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:18.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.996 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.996 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.996 00:35:18.996 Run status group 0 (all jobs): 00:35:18.996 READ: bw=49.7MiB/s (52.1MB/s), 2111KiB/s-2167KiB/s (2162kB/s-2219kB/s), io=498MiB (522MB), run=10002-10021msec 00:35:18.996 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:18.996 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:18.996 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:18.996 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.997 bdev_null0 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.997 18:43:10 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.997 [2024-10-08 18:43:10.885651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.997 bdev_null1 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 
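The create_subsystems/destroy_subsystems trace above reduces to four RPCs per subsystem on the way up and two on the way down. A condensed sketch, with rpc.py standing in for the suite's rpc_cmd wrapper; the null-bdev geometry, NQNs, and the 10.0.0.2:4420 TCP listener are copied verbatim from the trace:

create_subsystem() {                 # one null bdev + one NVMe-oF subsystem
    local id=$1
    rpc.py bdev_null_create "bdev_null$id" 64 512 --md-size 16 --dif-type 1
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$id" \
        --serial-number "53313233-$id" --allow-any-host
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$id" "bdev_null$id"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$id" \
        -t tcp -a 10.0.0.2 -s 4420
}

destroy_subsystem() {                # teardown mirrors creation in reverse
    local id=$1
    rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$id"
    rpc.py bdev_null_delete "bdev_null$id"
}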
00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:18.997 { 00:35:18.997 "params": { 00:35:18.997 "name": "Nvme$subsystem", 00:35:18.997 "trtype": "$TEST_TRANSPORT", 00:35:18.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:18.997 "adrfam": "ipv4", 00:35:18.997 "trsvcid": "$NVMF_PORT", 00:35:18.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:18.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:18.997 "hdgst": ${hdgst:-false}, 00:35:18.997 "ddgst": ${ddgst:-false} 00:35:18.997 }, 00:35:18.997 "method": "bdev_nvme_attach_controller" 00:35:18.997 } 00:35:18.997 EOF 00:35:18.997 )") 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:18.997 { 00:35:18.997 "params": { 00:35:18.997 "name": "Nvme$subsystem", 00:35:18.997 "trtype": "$TEST_TRANSPORT", 00:35:18.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:18.997 "adrfam": "ipv4", 00:35:18.997 "trsvcid": "$NVMF_PORT", 00:35:18.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:18.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:18.997 "hdgst": ${hdgst:-false}, 00:35:18.997 "ddgst": ${ddgst:-false} 00:35:18.997 }, 00:35:18.997 "method": "bdev_nvme_attach_controller" 00:35:18.997 } 00:35:18.997 EOF 00:35:18.997 )") 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:35:18.997 18:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:18.998 18:43:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:18.998 18:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:35:18.998 18:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:35:18.998 18:43:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:18.998 "params": { 00:35:18.998 "name": "Nvme0", 00:35:18.998 "trtype": "tcp", 00:35:18.998 "traddr": "10.0.0.2", 00:35:18.998 "adrfam": "ipv4", 00:35:18.998 "trsvcid": "4420", 00:35:18.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:18.998 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:18.998 "hdgst": false, 00:35:18.998 "ddgst": false 00:35:18.998 }, 00:35:18.998 "method": "bdev_nvme_attach_controller" 00:35:18.998 },{ 00:35:18.998 "params": { 00:35:18.998 "name": "Nvme1", 00:35:18.998 "trtype": "tcp", 00:35:18.998 "traddr": "10.0.0.2", 00:35:18.998 "adrfam": "ipv4", 00:35:18.998 "trsvcid": "4420", 00:35:18.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:18.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:18.998 "hdgst": false, 00:35:18.998 "ddgst": false 00:35:18.998 }, 00:35:18.998 "method": "bdev_nvme_attach_controller" 00:35:18.998 }' 00:35:18.998 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:18.998 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:18.998 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:18.998 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:18.998 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:18.998 18:43:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:18.998 18:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:18.998 18:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:18.998 18:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:18.998 18:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:18.998 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:18.998 ... 00:35:18.998 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:18.998 ... 
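The fio_bdev call traced above amounts to preloading SPDK's fio plugin and handing fio two file descriptors: the generated bdev JSON config and the job file. A minimal sketch, with $SPDK_ROOT standing in for the workspace path printed in the log:

LD_PRELOAD="$SPDK_ROOT/build/fio/spdk_bdev" \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0 1) \
    <(gen_fio_conf)
# /dev/fd/62 and /dev/fd/61 in the trace are exactly these two process
# substitutions: the JSON subsystem config and the fio job file.

As a sanity check on the per-thread tables above, bandwidth and IOPS agree: avg=2115.37 KiB/s divided by avg=528.84 IOPS is exactly 4 KiB per I/O, so these jobs are issuing 4 KiB random reads.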
00:35:18.998 fio-3.35 00:35:18.998 Starting 4 threads 00:35:24.265 00:35:24.265 filename0: (groupid=0, jobs=1): err= 0: pid=687931: Tue Oct 8 18:43:17 2024 00:35:24.265 read: IOPS=2847, BW=22.2MiB/s (23.3MB/s)(111MiB/5002msec) 00:35:24.265 slat (nsec): min=5997, max=35302, avg=8736.37, stdev=3056.32 00:35:24.265 clat (usec): min=796, max=5008, avg=2782.74, stdev=391.38 00:35:24.265 lat (usec): min=812, max=5014, avg=2791.48, stdev=391.30 00:35:24.265 clat percentiles (usec): 00:35:24.265 | 1.00th=[ 1647], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2474], 00:35:24.265 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 2933], 00:35:24.265 | 70.00th=[ 2933], 80.00th=[ 2999], 90.00th=[ 3195], 95.00th=[ 3359], 00:35:24.265 | 99.00th=[ 3916], 99.50th=[ 4113], 99.90th=[ 4621], 99.95th=[ 4883], 00:35:24.265 | 99.99th=[ 5014] 00:35:24.265 bw ( KiB/s): min=21712, max=24416, per=26.35%, avg=22712.89, stdev=869.03, samples=9 00:35:24.265 iops : min= 2714, max= 3052, avg=2839.11, stdev=108.63, samples=9 00:35:24.265 lat (usec) : 1000=0.09% 00:35:24.265 lat (msec) : 2=2.39%, 4=96.88%, 10=0.65% 00:35:24.265 cpu : usr=95.72%, sys=3.96%, ctx=7, majf=0, minf=9 00:35:24.265 IO depths : 1=0.2%, 2=7.2%, 4=64.8%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.265 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.265 issued rwts: total=14241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.265 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:24.265 filename0: (groupid=0, jobs=1): err= 0: pid=687932: Tue Oct 8 18:43:17 2024 00:35:24.265 read: IOPS=2658, BW=20.8MiB/s (21.8MB/s)(104MiB/5001msec) 00:35:24.265 slat (nsec): min=5959, max=48412, avg=8879.32, stdev=3146.72 00:35:24.265 clat (usec): min=771, max=5502, avg=2984.03, stdev=495.11 00:35:24.265 lat (usec): min=780, max=5519, avg=2992.91, stdev=494.88 00:35:24.265 clat percentiles (usec): 00:35:24.265 | 1.00th=[ 1975], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2671], 00:35:24.265 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:35:24.265 | 70.00th=[ 3032], 80.00th=[ 3228], 90.00th=[ 3621], 95.00th=[ 4015], 00:35:24.265 | 99.00th=[ 4621], 99.50th=[ 4948], 99.90th=[ 5145], 99.95th=[ 5211], 00:35:24.265 | 99.99th=[ 5473] 00:35:24.265 bw ( KiB/s): min=19808, max=22096, per=24.68%, avg=21272.22, stdev=617.77, samples=9 00:35:24.265 iops : min= 2476, max= 2762, avg=2659.00, stdev=77.22, samples=9 00:35:24.265 lat (usec) : 1000=0.10% 00:35:24.265 lat (msec) : 2=1.03%, 4=93.91%, 10=4.96% 00:35:24.265 cpu : usr=95.98%, sys=3.70%, ctx=8, majf=0, minf=9 00:35:24.265 IO depths : 1=0.1%, 2=3.9%, 4=67.0%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.265 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.265 issued rwts: total=13294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.265 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:24.265 filename1: (groupid=0, jobs=1): err= 0: pid=687933: Tue Oct 8 18:43:17 2024 00:35:24.265 read: IOPS=2589, BW=20.2MiB/s (21.2MB/s)(101MiB/5001msec) 00:35:24.265 slat (nsec): min=6001, max=45741, avg=8599.37, stdev=3187.50 00:35:24.265 clat (usec): min=876, max=5445, avg=3063.98, stdev=462.84 00:35:24.265 lat (usec): min=888, max=5452, avg=3072.58, stdev=462.72 00:35:24.265 clat percentiles (usec): 00:35:24.265 | 1.00th=[ 2114], 5.00th=[ 2474], 10.00th=[ 
2671], 20.00th=[ 2802], 00:35:24.265 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:35:24.265 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3621], 95.00th=[ 3982], 00:35:24.265 | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 5276], 99.95th=[ 5407], 00:35:24.265 | 99.99th=[ 5473] 00:35:24.265 bw ( KiB/s): min=19792, max=21808, per=24.14%, avg=20808.89, stdev=638.65, samples=9 00:35:24.265 iops : min= 2474, max= 2726, avg=2601.11, stdev=79.83, samples=9 00:35:24.265 lat (usec) : 1000=0.02% 00:35:24.265 lat (msec) : 2=0.56%, 4=94.73%, 10=4.70% 00:35:24.265 cpu : usr=95.90%, sys=3.80%, ctx=10, majf=0, minf=9 00:35:24.265 IO depths : 1=0.2%, 2=2.7%, 4=69.9%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.265 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.265 issued rwts: total=12950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.265 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:24.265 filename1: (groupid=0, jobs=1): err= 0: pid=687934: Tue Oct 8 18:43:17 2024 00:35:24.265 read: IOPS=2679, BW=20.9MiB/s (21.9MB/s)(105MiB/5002msec) 00:35:24.265 slat (nsec): min=5977, max=37642, avg=8966.08, stdev=3275.38 00:35:24.265 clat (usec): min=919, max=5415, avg=2959.75, stdev=431.71 00:35:24.265 lat (usec): min=926, max=5421, avg=2968.72, stdev=431.51 00:35:24.265 clat percentiles (usec): 00:35:24.265 | 1.00th=[ 1975], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2671], 00:35:24.265 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:35:24.265 | 70.00th=[ 3032], 80.00th=[ 3195], 90.00th=[ 3458], 95.00th=[ 3785], 00:35:24.265 | 99.00th=[ 4424], 99.50th=[ 4817], 99.90th=[ 5080], 99.95th=[ 5145], 00:35:24.265 | 99.99th=[ 5342] 00:35:24.265 bw ( KiB/s): min=20400, max=21856, per=24.90%, avg=21464.89, stdev=446.65, samples=9 00:35:24.265 iops : min= 2550, max= 2732, avg=2683.11, stdev=55.83, samples=9 00:35:24.265 lat (usec) : 1000=0.01% 00:35:24.265 lat (msec) : 2=1.19%, 4=95.72%, 10=3.09% 00:35:24.265 cpu : usr=96.18%, sys=3.52%, ctx=6, majf=0, minf=9 00:35:24.265 IO depths : 1=0.2%, 2=3.5%, 4=67.7%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.265 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.265 issued rwts: total=13402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.265 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:24.265 00:35:24.265 Run status group 0 (all jobs): 00:35:24.265 READ: bw=84.2MiB/s (88.3MB/s), 20.2MiB/s-22.2MiB/s (21.2MB/s-23.3MB/s), io=421MiB (441MB), run=5001-5002msec 00:35:24.265 18:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:24.265 18:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:24.265 18:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:24.265 18:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:24.265 18:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:24.265 18:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:24.265 18:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.265 18:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:24.265 18:43:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.265 18:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:24.265 18:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.266 18:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:24.266 18:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.266 18:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:24.266 18:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:24.266 18:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:24.266 18:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:24.266 18:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.266 18:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:24.266 18:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.266 18:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:24.266 18:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.266 18:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:24.266 18:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.266 00:35:24.266 real 0m24.527s 00:35:24.266 user 4m53.000s 00:35:24.266 sys 0m5.016s 00:35:24.266 18:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:24.266 18:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:24.266 ************************************ 00:35:24.266 END TEST fio_dif_rand_params 00:35:24.266 ************************************ 00:35:24.266 18:43:17 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:24.266 18:43:17 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:24.266 18:43:17 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:24.266 18:43:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:24.266 ************************************ 00:35:24.266 START TEST fio_dif_digest 00:35:24.266 ************************************ 00:35:24.266 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:35:24.266 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:24.266 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:24.266 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:24.266 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:24.266 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:24.266 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:24.266 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:24.266 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:24.266 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:24.524 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:24.524 18:43:17 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:24.524 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:24.524 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:24.524 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:24.524 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:24.524 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:24.524 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.524 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:24.525 bdev_null0 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:24.525 [2024-10-08 18:43:17.621035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:24.525 { 00:35:24.525 "params": { 00:35:24.525 "name": "Nvme$subsystem", 00:35:24.525 "trtype": "$TEST_TRANSPORT", 
00:35:24.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:24.525 "adrfam": "ipv4", 00:35:24.525 "trsvcid": "$NVMF_PORT", 00:35:24.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:24.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:24.525 "hdgst": ${hdgst:-false}, 00:35:24.525 "ddgst": ${ddgst:-false} 00:35:24.525 }, 00:35:24.525 "method": "bdev_nvme_attach_controller" 00:35:24.525 } 00:35:24.525 EOF 00:35:24.525 )") 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:24.525 "params": { 00:35:24.525 "name": "Nvme0", 00:35:24.525 "trtype": "tcp", 00:35:24.525 "traddr": "10.0.0.2", 00:35:24.525 "adrfam": "ipv4", 00:35:24.525 "trsvcid": "4420", 00:35:24.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:24.525 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:24.525 "hdgst": true, 00:35:24.525 "ddgst": true 00:35:24.525 }, 00:35:24.525 "method": "bdev_nvme_attach_controller" 00:35:24.525 }' 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:24.525 18:43:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.784 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:24.784 ... 
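The job header just printed implies a job file along these lines. gen_fio_conf's exact output is not shown in the log, so this is an inferred sketch built from the dif.sh parameters above (bs=128k, numjobs=3, iodepth=3, runtime=10), and the filename assumes the conventional Nvme0n1 bdev that bdev_nvme_attach_controller creates for controller Nvme0:

cat > digest.fio <<EOF
[global]
thread=1
# ioengine supplied by the preloaded SPDK fio plugin
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
# assumed bdev name; not shown verbatim in the log
filename=Nvme0n1
EOF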
00:35:24.784 fio-3.35 00:35:24.784 Starting 3 threads 00:35:36.984 00:35:36.984 filename0: (groupid=0, jobs=1): err= 0: pid=689068: Tue Oct 8 18:43:28 2024 00:35:36.984 read: IOPS=294, BW=36.8MiB/s (38.5MB/s)(369MiB/10047msec) 00:35:36.984 slat (usec): min=6, max=104, avg=20.83, stdev= 7.07 00:35:36.984 clat (usec): min=7428, max=49469, avg=10165.50, stdev=1305.80 00:35:36.984 lat (usec): min=7442, max=49497, avg=10186.34, stdev=1306.74 00:35:36.984 clat percentiles (usec): 00:35:36.984 | 1.00th=[ 7963], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9503], 00:35:36.984 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:35:36.984 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:35:36.984 | 99.00th=[12256], 99.50th=[12518], 99.90th=[13042], 99.95th=[47973], 00:35:36.984 | 99.99th=[49546] 00:35:36.984 bw ( KiB/s): min=35328, max=44288, per=34.16%, avg=37785.60, stdev=1776.34, samples=20 00:35:36.984 iops : min= 276, max= 346, avg=295.20, stdev=13.88, samples=20 00:35:36.984 lat (msec) : 10=41.98%, 20=57.96%, 50=0.07% 00:35:36.984 cpu : usr=95.42%, sys=3.79%, ctx=192, majf=0, minf=195 00:35:36.984 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.984 issued rwts: total=2954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.984 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:36.984 filename0: (groupid=0, jobs=1): err= 0: pid=689069: Tue Oct 8 18:43:28 2024 00:35:36.984 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(351MiB/10006msec) 00:35:36.984 slat (nsec): min=6877, max=64216, avg=22043.00, stdev=9041.26 00:35:36.984 clat (usec): min=5402, max=15034, avg=10657.70, stdev=845.15 00:35:36.984 lat (usec): min=5417, max=15047, avg=10679.75, stdev=844.37 00:35:36.984 clat percentiles (usec): 00:35:36.984 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:35:36.984 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:35:36.984 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:35:36.984 | 99.00th=[12911], 99.50th=[13304], 99.90th=[14222], 99.95th=[14615], 00:35:36.984 | 99.99th=[15008] 00:35:36.984 bw ( KiB/s): min=32000, max=37632, per=32.48%, avg=35920.84, stdev=1330.36, samples=19 00:35:36.984 iops : min= 250, max= 294, avg=280.63, stdev=10.39, samples=19 00:35:36.984 lat (msec) : 10=20.53%, 20=79.47% 00:35:36.984 cpu : usr=91.67%, sys=5.30%, ctx=701, majf=0, minf=63 00:35:36.984 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.984 issued rwts: total=2810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.984 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:36.984 filename0: (groupid=0, jobs=1): err= 0: pid=689070: Tue Oct 8 18:43:28 2024 00:35:36.984 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(365MiB/10046msec) 00:35:36.984 slat (nsec): min=6536, max=74152, avg=16784.41, stdev=6573.63 00:35:36.984 clat (usec): min=7594, max=47168, avg=10298.89, stdev=1232.50 00:35:36.984 lat (usec): min=7616, max=47181, avg=10315.67, stdev=1232.17 00:35:36.984 clat percentiles (usec): 00:35:36.984 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:35:36.984 | 30.00th=[ 9896], 
40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:35:36.984 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:35:36.984 | 99.00th=[12649], 99.50th=[12911], 99.90th=[14746], 99.95th=[45351], 00:35:36.984 | 99.99th=[46924] 00:35:36.984 bw ( KiB/s): min=34048, max=38912, per=33.74%, avg=37312.00, stdev=1095.21, samples=20 00:35:36.984 iops : min= 266, max= 304, avg=291.50, stdev= 8.56, samples=20 00:35:36.984 lat (msec) : 10=37.09%, 20=62.84%, 50=0.07% 00:35:36.984 cpu : usr=97.21%, sys=2.44%, ctx=17, majf=0, minf=54 00:35:36.984 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.984 issued rwts: total=2917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.984 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:36.984 00:35:36.984 Run status group 0 (all jobs): 00:35:36.984 READ: bw=108MiB/s (113MB/s), 35.1MiB/s-36.8MiB/s (36.8MB/s-38.5MB/s), io=1085MiB (1138MB), run=10006-10047msec 00:35:36.985 18:43:28 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:36.985 18:43:28 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:36.985 18:43:28 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:36.985 18:43:28 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:36.985 18:43:28 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:36.985 18:43:28 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:36.985 18:43:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.985 18:43:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:36.985 18:43:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.985 18:43:28 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:36.985 18:43:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.985 18:43:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:36.985 18:43:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.985 00:35:36.985 real 0m11.307s 00:35:36.985 user 0m35.687s 00:35:36.985 sys 0m1.515s 00:35:36.985 18:43:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:36.985 18:43:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:36.985 ************************************ 00:35:36.985 END TEST fio_dif_digest 00:35:36.985 ************************************ 00:35:36.985 18:43:28 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:36.985 18:43:28 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:36.985 18:43:28 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:36.985 18:43:28 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:36.985 18:43:28 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:36.985 18:43:28 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:36.985 18:43:28 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:36.985 18:43:28 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:36.985 rmmod nvme_tcp 00:35:36.985 rmmod nvme_fabrics 00:35:36.985 rmmod nvme_keyring 00:35:36.985 18:43:28 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:35:36.985 18:43:29 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:36.985 18:43:29 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:36.985 18:43:29 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 680447 ']' 00:35:36.985 18:43:29 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 680447 00:35:36.985 18:43:29 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 680447 ']' 00:35:36.985 18:43:29 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 680447 00:35:36.985 18:43:29 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:35:36.985 18:43:29 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:36.985 18:43:29 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 680447 00:35:36.985 18:43:29 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:36.985 18:43:29 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:36.985 18:43:29 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 680447' 00:35:36.985 killing process with pid 680447 00:35:36.985 18:43:29 nvmf_dif -- common/autotest_common.sh@969 -- # kill 680447 00:35:36.985 18:43:29 nvmf_dif -- common/autotest_common.sh@974 -- # wait 680447 00:35:36.985 18:43:29 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:35:36.985 18:43:29 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:38.888 Waiting for block devices as requested 00:35:38.888 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:38.888 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:38.888 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:39.147 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:39.147 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:39.147 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:39.406 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:39.406 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:39.406 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:39.406 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:39.665 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:39.665 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:39.665 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:39.923 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:39.923 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:39.923 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:40.182 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:40.182 18:43:33 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:40.182 18:43:33 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:40.182 18:43:33 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:40.182 18:43:33 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:35:40.182 18:43:33 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:40.182 18:43:33 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:35:40.182 18:43:33 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:40.182 18:43:33 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:40.182 18:43:33 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.182 18:43:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:40.182 18:43:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.714 18:43:35 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:42.714 00:35:42.714 real 1m15.119s 00:35:42.714 user 
7m12.773s 00:35:42.714 sys 0m20.009s 00:35:42.714 18:43:35 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:42.714 18:43:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:42.714 ************************************ 00:35:42.714 END TEST nvmf_dif 00:35:42.714 ************************************ 00:35:42.714 18:43:35 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:42.714 18:43:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:42.714 18:43:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:42.714 18:43:35 -- common/autotest_common.sh@10 -- # set +x 00:35:42.714 ************************************ 00:35:42.714 START TEST nvmf_abort_qd_sizes 00:35:42.714 ************************************ 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:42.714 * Looking for test storage... 00:35:42.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:42.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.714 --rc genhtml_branch_coverage=1 00:35:42.714 --rc genhtml_function_coverage=1 00:35:42.714 --rc genhtml_legend=1 00:35:42.714 --rc geninfo_all_blocks=1 00:35:42.714 --rc geninfo_unexecuted_blocks=1 00:35:42.714 00:35:42.714 ' 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:42.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.714 --rc genhtml_branch_coverage=1 00:35:42.714 --rc genhtml_function_coverage=1 00:35:42.714 --rc genhtml_legend=1 00:35:42.714 --rc geninfo_all_blocks=1 00:35:42.714 --rc geninfo_unexecuted_blocks=1 00:35:42.714 00:35:42.714 ' 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:42.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.714 --rc genhtml_branch_coverage=1 00:35:42.714 --rc genhtml_function_coverage=1 00:35:42.714 --rc genhtml_legend=1 00:35:42.714 --rc geninfo_all_blocks=1 00:35:42.714 --rc geninfo_unexecuted_blocks=1 00:35:42.714 00:35:42.714 ' 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:42.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.714 --rc genhtml_branch_coverage=1 00:35:42.714 --rc genhtml_function_coverage=1 00:35:42.714 --rc genhtml_legend=1 00:35:42.714 --rc geninfo_all_blocks=1 00:35:42.714 --rc geninfo_unexecuted_blocks=1 00:35:42.714 00:35:42.714 ' 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.714 18:43:35 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:42.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:42.715 18:43:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:47.993 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:47.993 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:47.993 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:47.993 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:47.993 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:47.993 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:47.993 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:47.993 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:47.993 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:47.993 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:47.993 18:43:41 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:35:47.993 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:47.993 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:47.993 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:47.993 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:47.994 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:47.994 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:47.994 Found net devices under 0000:86:00.0: cvl_0_0 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:47.994 Found net devices under 0000:86:00.1: cvl_0_1 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:47.994 18:43:41 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:47.994 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:48.254 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:48.254 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:48.254 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:48.254 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:48.254 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:48.254 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:48.254 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:48.254 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:48.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:48.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:35:48.254 00:35:48.254 --- 10.0.0.2 ping statistics --- 00:35:48.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:48.254 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:35:48.254 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:48.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:48.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:35:48.254 00:35:48.254 --- 10.0.0.1 ping statistics --- 00:35:48.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:48.254 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:35:48.254 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:48.254 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:35:48.254 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:35:48.254 18:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:51.544 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:51.544 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:51.544 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:51.544 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:51.544 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:51.544 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:51.544 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:51.544 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:51.544 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:51.544 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:51.544 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:51.544 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:51.544 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:51.544 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:51.544 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:51.544 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:52.481 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=697083 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 697083 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 697083 ']' 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:52.739 18:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:52.740 18:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:52.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:52.740 18:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:52.740 18:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:52.740 [2024-10-08 18:43:45.967499] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:35:52.740 [2024-10-08 18:43:45.967540] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:52.740 [2024-10-08 18:43:46.040922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:52.998 [2024-10-08 18:43:46.118524] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:52.998 [2024-10-08 18:43:46.118562] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:52.998 [2024-10-08 18:43:46.118569] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:52.998 [2024-10-08 18:43:46.118575] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:52.998 [2024-10-08 18:43:46.118580] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:52.998 [2024-10-08 18:43:46.120215] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:52.998 [2024-10-08 18:43:46.120321] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:35:52.998 [2024-10-08 18:43:46.120430] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:52.998 [2024-10-08 18:43:46.120430] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:35:53.565 18:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:53.565 18:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:35:53.565 18:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:53.565 18:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:53.565 18:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:53.565 18:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:53.565 18:43:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:53.565 18:43:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:53.565 18:43:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:53.565 18:43:46 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:53.565 18:43:46 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:53.566 18:43:46 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:53.566 18:43:46 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:53.566 18:43:46 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:53.566 18:43:46 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:35:53.566 18:43:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:53.566 
18:43:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:53.566 18:43:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:53.566 18:43:46 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:53.566 18:43:46 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:53.566 18:43:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:53.566 18:43:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:53.566 18:43:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:53.566 18:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:53.566 18:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:53.566 18:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:53.825 ************************************ 00:35:53.825 START TEST spdk_target_abort 00:35:53.825 ************************************ 00:35:53.825 18:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:35:53.825 18:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:53.825 18:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:53.825 18:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.825 18:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.109 spdk_targetn1 00:35:57.109 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.109 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:57.109 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.109 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.109 [2024-10-08 18:43:49.722552] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:57.109 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.109 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:57.109 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.110 [2024-10-08 18:43:49.751530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:57.110 18:43:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:00.396 Initializing NVMe Controllers 00:36:00.396 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:00.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:00.396 Initialization complete. Launching workers. 00:36:00.396 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17939, failed: 0 00:36:00.397 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1337, failed to submit 16602 00:36:00.397 success 794, unsuccessful 543, failed 0 00:36:00.397 18:43:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:00.397 18:43:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:03.681 Initializing NVMe Controllers 00:36:03.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:03.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:03.681 Initialization complete. Launching workers. 00:36:03.681 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8518, failed: 0 00:36:03.681 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1239, failed to submit 7279 00:36:03.681 success 290, unsuccessful 949, failed 0 00:36:03.681 18:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:03.681 18:43:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:06.965 Initializing NVMe Controllers 00:36:06.965 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:06.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:06.965 Initialization complete. Launching workers. 
00:36:06.965 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38501, failed: 0 00:36:06.965 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2803, failed to submit 35698 00:36:06.965 success 602, unsuccessful 2201, failed 0 00:36:06.965 18:43:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:06.965 18:43:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.965 18:43:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:06.965 18:43:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.965 18:43:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:06.965 18:43:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.965 18:43:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:08.341 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.341 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 697083 00:36:08.341 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 697083 ']' 00:36:08.341 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 697083 00:36:08.341 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:36:08.341 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:08.341 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 697083 00:36:08.341 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:08.341 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:08.341 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 697083' 00:36:08.341 killing process with pid 697083 00:36:08.341 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 697083 00:36:08.341 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 697083 00:36:08.599 00:36:08.599 real 0m14.783s 00:36:08.599 user 0m58.689s 00:36:08.599 sys 0m2.659s 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:08.599 ************************************ 00:36:08.599 END TEST spdk_target_abort 00:36:08.599 ************************************ 00:36:08.599 18:44:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:08.599 18:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:08.599 18:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:08.599 18:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:08.599 ************************************ 00:36:08.599 START TEST kernel_target_abort 00:36:08.599 
************************************ 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:08.599 18:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:11.132 Waiting for block devices as requested 00:36:11.391 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:11.391 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:11.649 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:11.649 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:11.649 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:11.649 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:11.907 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:11.907 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:11.907 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:12.165 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:12.165 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:12.165 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:12.165 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:12.423 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:12.423 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:12.423 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:12.423 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:12.682 No valid GPT data, bailing 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:12.682 18:44:05 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:12.682 00:36:12.682 Discovery Log Number of Records 2, Generation counter 2 00:36:12.682 =====Discovery Log Entry 0====== 00:36:12.682 trtype: tcp 00:36:12.682 adrfam: ipv4 00:36:12.682 subtype: current discovery subsystem 00:36:12.682 treq: not specified, sq flow control disable supported 00:36:12.682 portid: 1 00:36:12.682 trsvcid: 4420 00:36:12.682 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:12.682 traddr: 10.0.0.1 00:36:12.682 eflags: none 00:36:12.682 sectype: none 00:36:12.682 =====Discovery Log Entry 1====== 00:36:12.682 trtype: tcp 00:36:12.682 adrfam: ipv4 00:36:12.682 subtype: nvme subsystem 00:36:12.682 treq: not specified, sq flow control disable supported 00:36:12.682 portid: 1 00:36:12.682 trsvcid: 4420 00:36:12.682 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:12.682 traddr: 10.0.0.1 00:36:12.682 eflags: none 00:36:12.682 sectype: none 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:12.682 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:12.683 18:44:05 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:12.683 18:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:15.968 Initializing NVMe Controllers 00:36:15.968 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:15.968 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:15.968 Initialization complete. Launching workers. 00:36:15.968 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96023, failed: 0 00:36:15.968 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 96023, failed to submit 0 00:36:15.968 success 0, unsuccessful 96023, failed 0 00:36:15.968 18:44:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:15.968 18:44:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:19.249 Initializing NVMe Controllers 00:36:19.249 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:19.249 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:19.249 Initialization complete. Launching workers. 
00:36:19.249 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 150782, failed: 0 00:36:19.249 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38042, failed to submit 112740 00:36:19.249 success 0, unsuccessful 38042, failed 0 00:36:19.249 18:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:19.249 18:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:22.533 Initializing NVMe Controllers 00:36:22.533 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:22.533 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:22.533 Initialization complete. Launching workers. 00:36:22.533 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142253, failed: 0 00:36:22.533 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35618, failed to submit 106635 00:36:22.533 success 0, unsuccessful 35618, failed 0 00:36:22.533 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:22.533 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:22.533 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:36:22.533 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:22.533 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:22.533 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:22.533 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:22.533 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:36:22.533 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:36:22.533 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:25.177 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:25.177 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:25.177 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:25.177 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:25.177 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:25.177 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:25.177 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:25.177 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:25.177 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:25.177 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:25.177 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:25.177 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:25.177 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:25.177 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:25.177 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:36:25.177 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:26.553 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:26.553 00:36:26.553 real 0m17.940s 00:36:26.553 user 0m9.189s 00:36:26.553 sys 0m4.985s 00:36:26.553 18:44:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:26.553 18:44:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:26.553 ************************************ 00:36:26.553 END TEST kernel_target_abort 00:36:26.553 ************************************ 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:26.554 rmmod nvme_tcp 00:36:26.554 rmmod nvme_fabrics 00:36:26.554 rmmod nvme_keyring 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 697083 ']' 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 697083 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 697083 ']' 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 697083 00:36:26.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (697083) - No such process 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 697083 is not found' 00:36:26.554 Process with pid 697083 is not found 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:36:26.554 18:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:29.839 Waiting for block devices as requested 00:36:29.839 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:29.839 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:29.839 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:29.839 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:29.839 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:29.839 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:29.839 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:29.839 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:30.098 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:30.098 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:30.098 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:30.098 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:30.358 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:30.358 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:30.358 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:30.617 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:30.617 
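Note on clean_kernel_target (nvmf/common.sh@710-721, traced above): it unwinds the configfs wiring in strict reverse order of setup before unloading the modules. A hedged sketch of the same teardown; the bare "echo 0" in the trace does not show its redirect, so the namespace enable attribute as its destination is an assumption:

    #!/usr/bin/env bash
    # Tear down a kernel nvmet TCP target configured under configfs.
    nqn=nqn.2016-06.io.spdk:testnqn
    cfs=/sys/kernel/config/nvmet

    [[ -e $cfs/subsystems/$nqn ]] || exit 0
    echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"  # assumed target of the bare 'echo 0'
    rm -f "$cfs/ports/1/subsystems/$nqn"                 # break the port -> subsystem link first
    rmdir "$cfs/subsystems/$nqn/namespaces/1"
    rmdir "$cfs/ports/1"
    rmdir "$cfs/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet                          # only unloads once no holders remain

The order matters: configfs refuses to rmdir a directory that still has children or an active port link, which is why the link is removed before any of the directories.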
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:30.617 18:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:30.617 18:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:30.617 18:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:30.617 18:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:36:30.617 18:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:30.617 18:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:36:30.617 18:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:30.617 18:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:30.617 18:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:30.617 18:44:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:30.617 18:44:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.151 18:44:25 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:33.151 00:36:33.151 real 0m50.457s 00:36:33.151 user 1m12.441s 00:36:33.151 sys 0m16.301s 00:36:33.151 18:44:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:33.151 18:44:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:33.151 ************************************ 00:36:33.151 END TEST nvmf_abort_qd_sizes 00:36:33.151 ************************************ 00:36:33.151 18:44:25 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:33.151 18:44:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:33.151 18:44:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:33.151 18:44:26 -- common/autotest_common.sh@10 -- # set +x 00:36:33.151 ************************************ 00:36:33.151 START TEST keyring_file 00:36:33.151 ************************************ 00:36:33.151 18:44:26 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:33.151 * Looking for test storage... 
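Note on the banner pairs: the starred START TEST / END TEST blocks and the real/user/sys summaries around each suite come from the run_test helper (autotest.sh@288 invokes it above for keyring_file). A hedged sketch of its observable behaviour only; the real helper in autotest_common.sh also manages xtrace state and argument checks, which are elided here:

    #!/usr/bin/env bash
    # run_test <name> <command...>: banner, timed run, banner -- the contract visible in this log.
    run_test() {
        local name=$1; shift
        printf '************************************\n'
        printf 'START TEST %s\n' "$name"
        printf '************************************\n'
        time "$@"
        local rc=$?
        printf '************************************\n'
        printf 'END TEST %s\n' "$name"
        printf '************************************\n'
        return $rc
    }

    run_test keyring_file ./test/keyring/file.sh   # as autotest.sh@288 does above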
00:36:33.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:33.151 18:44:26 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:33.151 18:44:26 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:36:33.151 18:44:26 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:33.151 18:44:26 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:33.151 18:44:26 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:33.151 18:44:26 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:33.151 18:44:26 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:33.151 18:44:26 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:33.151 18:44:26 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:33.151 18:44:26 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:33.151 18:44:26 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:33.151 18:44:26 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:33.151 18:44:26 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:33.152 18:44:26 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:33.152 18:44:26 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:33.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.152 --rc genhtml_branch_coverage=1 00:36:33.152 --rc genhtml_function_coverage=1 00:36:33.152 --rc genhtml_legend=1 00:36:33.152 --rc geninfo_all_blocks=1 00:36:33.152 --rc geninfo_unexecuted_blocks=1 00:36:33.152 00:36:33.152 ' 00:36:33.152 18:44:26 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:33.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.152 --rc genhtml_branch_coverage=1 00:36:33.152 --rc genhtml_function_coverage=1 00:36:33.152 --rc genhtml_legend=1 00:36:33.152 --rc geninfo_all_blocks=1 
00:36:33.152 --rc geninfo_unexecuted_blocks=1 00:36:33.152 00:36:33.152 ' 00:36:33.152 18:44:26 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:33.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.152 --rc genhtml_branch_coverage=1 00:36:33.152 --rc genhtml_function_coverage=1 00:36:33.152 --rc genhtml_legend=1 00:36:33.152 --rc geninfo_all_blocks=1 00:36:33.152 --rc geninfo_unexecuted_blocks=1 00:36:33.152 00:36:33.152 ' 00:36:33.152 18:44:26 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:33.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.152 --rc genhtml_branch_coverage=1 00:36:33.152 --rc genhtml_function_coverage=1 00:36:33.152 --rc genhtml_legend=1 00:36:33.152 --rc geninfo_all_blocks=1 00:36:33.152 --rc geninfo_unexecuted_blocks=1 00:36:33.152 00:36:33.152 ' 00:36:33.152 18:44:26 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.152 18:44:26 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:33.152 18:44:26 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.152 18:44:26 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.152 18:44:26 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.152 18:44:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:33.152 18:44:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:33.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:33.152 18:44:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:33.152 18:44:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:33.152 18:44:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:33.152 18:44:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:33.152 18:44:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:33.152 18:44:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
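Note on prep_key, whose trace begins here and continues below: it turns a raw hex key into a mode-0600 PSK file that bperf can load. A sketch of how the pieces appear to fit together from the trace; the interchange encoding itself is produced by the inline python behind format_interchange_psk (nvmf/common.sh@741) and is not reproduced here, and the redirect into the temp file is an assumption since xtrace does not show it:

    #!/usr/bin/env bash
    # prep_key <name> <hex-key> <digest>: materialize the key as a 0600 temp file
    # in NVMe TLS interchange form ("NVMeTLSkey-1:...").
    prep_key() {
        local name=$1 key=$2 digest=$3 path
        path=$(mktemp)                                     # e.g. /tmp/tmp.3CD5wzUFq3 in this run
        format_interchange_psk "$key" "$digest" > "$path"  # assumed redirect; helper is in common.sh@741
        chmod 0600 "$path"                                 # tight perms required; see the 0660 failure later
        echo "$path"                                       # caller records this as key0path / key1path
    }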
00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3CD5wzUFq3 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@731 -- # python - 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3CD5wzUFq3 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3CD5wzUFq3 00:36:33.152 18:44:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.3CD5wzUFq3 00:36:33.152 18:44:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.RjN9YrzcRi 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:36:33.152 18:44:26 keyring_file -- nvmf/common.sh@731 -- # python - 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.RjN9YrzcRi 00:36:33.152 18:44:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.RjN9YrzcRi 00:36:33.152 18:44:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.RjN9YrzcRi 00:36:33.152 18:44:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=706097 00:36:33.152 18:44:26 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:33.152 18:44:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 706097 00:36:33.152 18:44:26 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 706097 ']' 00:36:33.152 18:44:26 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:33.152 18:44:26 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:33.152 18:44:26 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:33.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:33.153 18:44:26 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:33.153 18:44:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:33.153 [2024-10-08 18:44:26.409069] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:36:33.153 [2024-10-08 18:44:26.409117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706097 ] 00:36:33.412 [2024-10-08 18:44:26.477520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:33.412 [2024-10-08 18:44:26.555165] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:33.980 18:44:27 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:33.980 18:44:27 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:33.980 18:44:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:33.980 18:44:27 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.980 18:44:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:33.980 [2024-10-08 18:44:27.238437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:33.980 null0 00:36:33.980 [2024-10-08 18:44:27.270487] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:33.981 [2024-10-08 18:44:27.270846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:33.981 18:44:27 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.981 18:44:27 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:33.981 18:44:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:33.981 18:44:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:33.981 18:44:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:33.981 18:44:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:33.981 18:44:27 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:33.981 18:44:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:33.981 18:44:27 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:33.981 18:44:27 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.981 18:44:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:33.981 [2024-10-08 18:44:27.298542] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:34.240 request: 00:36:34.240 { 00:36:34.240 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:34.240 "secure_channel": false, 00:36:34.240 "listen_address": { 00:36:34.240 "trtype": "tcp", 00:36:34.240 "traddr": "127.0.0.1", 00:36:34.240 "trsvcid": "4420" 00:36:34.240 }, 00:36:34.240 "method": "nvmf_subsystem_add_listener", 00:36:34.240 "req_id": 1 00:36:34.240 } 00:36:34.240 Got JSON-RPC error response 00:36:34.240 response: 00:36:34.240 { 00:36:34.240 "code": 
-32602, 00:36:34.240 "message": "Invalid parameters" 00:36:34.240 } 00:36:34.240 18:44:27 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:34.240 18:44:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:34.240 18:44:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:34.240 18:44:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:34.240 18:44:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:34.240 18:44:27 keyring_file -- keyring/file.sh@47 -- # bperfpid=706274 00:36:34.240 18:44:27 keyring_file -- keyring/file.sh@49 -- # waitforlisten 706274 /var/tmp/bperf.sock 00:36:34.240 18:44:27 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:34.240 18:44:27 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 706274 ']' 00:36:34.240 18:44:27 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:34.240 18:44:27 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:34.240 18:44:27 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:34.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:34.240 18:44:27 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:34.240 18:44:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:34.240 [2024-10-08 18:44:27.352465] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:36:34.240 [2024-10-08 18:44:27.352508] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706274 ] 00:36:34.240 [2024-10-08 18:44:27.420001] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.240 [2024-10-08 18:44:27.499845] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:35.178 18:44:28 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:35.178 18:44:28 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:35.178 18:44:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3CD5wzUFq3 00:36:35.178 18:44:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3CD5wzUFq3 00:36:35.178 18:44:28 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RjN9YrzcRi 00:36:35.178 18:44:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RjN9YrzcRi 00:36:35.438 18:44:28 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:35.438 18:44:28 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:35.438 18:44:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.438 18:44:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:35.438 18:44:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.696 
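Note on bperf_cmd: it is a thin wrapper over rpc.py pointed at the bdevperf socket, so the key registration and the path/refcnt checks above and below boil down to the following, with the temp-file paths taken from this run:

    #!/usr/bin/env bash
    # Register both PSK files with the running bdevperf over its RPC socket,
    # then read a key's attributes back out of keyring_get_keys with jq.
    rpc="./scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc keyring_file_add_key key0 /tmp/tmp.3CD5wzUFq3
    $rpc keyring_file_add_key key1 /tmp/tmp.RjN9YrzcRi
    $rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .path'    # -> /tmp/tmp.3CD5wzUFq3
    $rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'  # 1 before attach, 2 after

The refcnt is the quantity the (( 1 == 1 )) / (( 2 == 2 )) assertions track: attaching a controller with --psk key0 takes an extra reference on that key, and detaching drops it back.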
18:44:28 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.3CD5wzUFq3 == \/\t\m\p\/\t\m\p\.\3\C\D\5\w\z\U\F\q\3 ]] 00:36:35.696 18:44:28 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:35.696 18:44:28 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:35.696 18:44:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.696 18:44:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:35.696 18:44:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.696 18:44:28 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.RjN9YrzcRi == \/\t\m\p\/\t\m\p\.\R\j\N\9\Y\r\z\c\R\i ]] 00:36:35.696 18:44:28 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:35.696 18:44:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:35.696 18:44:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.696 18:44:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.696 18:44:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:35.696 18:44:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.955 18:44:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:35.955 18:44:29 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:35.955 18:44:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:35.955 18:44:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.955 18:44:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.955 18:44:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:35.955 18:44:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.213 18:44:29 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:36.213 18:44:29 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:36.213 18:44:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:36.213 [2024-10-08 18:44:29.530889] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:36.471 nvme0n1 00:36:36.471 18:44:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:36.471 18:44:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:36.471 18:44:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:36.471 18:44:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:36.471 18:44:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:36.471 18:44:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.731 18:44:29 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:36.731 18:44:29 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:36.731 18:44:29 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:36:36.731 18:44:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:36.731 18:44:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:36.731 18:44:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:36.731 18:44:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.731 18:44:30 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:36.731 18:44:30 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:36.993 Running I/O for 1 seconds... 00:36:37.929 19340.00 IOPS, 75.55 MiB/s 00:36:37.929 Latency(us) 00:36:37.929 [2024-10-08T16:44:31.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:37.929 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:37.929 nvme0n1 : 1.00 19384.33 75.72 0.00 0.00 6591.58 2683.86 11172.33 00:36:37.929 [2024-10-08T16:44:31.252Z] =================================================================================================================== 00:36:37.929 [2024-10-08T16:44:31.252Z] Total : 19384.33 75.72 0.00 0.00 6591.58 2683.86 11172.33 00:36:37.929 { 00:36:37.929 "results": [ 00:36:37.929 { 00:36:37.929 "job": "nvme0n1", 00:36:37.929 "core_mask": "0x2", 00:36:37.929 "workload": "randrw", 00:36:37.929 "percentage": 50, 00:36:37.929 "status": "finished", 00:36:37.929 "queue_depth": 128, 00:36:37.929 "io_size": 4096, 00:36:37.929 "runtime": 1.004368, 00:36:37.929 "iops": 19384.329249836712, 00:36:37.929 "mibps": 75.72003613217466, 00:36:37.929 "io_failed": 0, 00:36:37.929 "io_timeout": 0, 00:36:37.929 "avg_latency_us": 6591.5756281659, 00:36:37.929 "min_latency_us": 2683.8552380952383, 00:36:37.929 "max_latency_us": 11172.327619047619 00:36:37.929 } 00:36:37.929 ], 00:36:37.929 "core_count": 1 00:36:37.929 } 00:36:37.929 18:44:31 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:37.929 18:44:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:38.187 18:44:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:38.187 18:44:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:38.187 18:44:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:38.187 18:44:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.187 18:44:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:38.187 18:44:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.446 18:44:31 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:38.446 18:44:31 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:38.446 18:44:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:38.446 18:44:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:38.446 18:44:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.446 18:44:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:38.446 18:44:31 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.446 18:44:31 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:38.446 18:44:31 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:38.446 18:44:31 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:38.446 18:44:31 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:38.446 18:44:31 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:38.446 18:44:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:38.446 18:44:31 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:38.446 18:44:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:38.446 18:44:31 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:38.446 18:44:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:38.705 [2024-10-08 18:44:31.934480] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:38.705 [2024-10-08 18:44:31.935205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ec2a0 (107): Transport endpoint is not connected 00:36:38.705 [2024-10-08 18:44:31.936199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ec2a0 (9): Bad file descriptor 00:36:38.705 [2024-10-08 18:44:31.937200] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:38.705 [2024-10-08 18:44:31.937209] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:38.705 [2024-10-08 18:44:31.937217] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:38.705 [2024-10-08 18:44:31.937225] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
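Note on the failure above: it is intentional. The controller was configured with key0, so dialing it with --psk key1 cannot complete the TLS handshake, the connection drops ("Transport endpoint is not connected"), and the NOT wrapper turns the non-zero exit into a test pass. The same assertion written out without the helper, with the socket path and NQNs as used in this run:

    #!/usr/bin/env bash
    # Negative test: attaching with a mismatched PSK must fail.
    if ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
           -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
           -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
        echo "unexpected success with mismatched PSK" >&2
        exit 1
    fi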
00:36:38.705 request: 00:36:38.705 { 00:36:38.705 "name": "nvme0", 00:36:38.705 "trtype": "tcp", 00:36:38.705 "traddr": "127.0.0.1", 00:36:38.705 "adrfam": "ipv4", 00:36:38.705 "trsvcid": "4420", 00:36:38.705 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:38.705 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:38.705 "prchk_reftag": false, 00:36:38.705 "prchk_guard": false, 00:36:38.705 "hdgst": false, 00:36:38.705 "ddgst": false, 00:36:38.705 "psk": "key1", 00:36:38.705 "allow_unrecognized_csi": false, 00:36:38.705 "method": "bdev_nvme_attach_controller", 00:36:38.705 "req_id": 1 00:36:38.705 } 00:36:38.705 Got JSON-RPC error response 00:36:38.705 response: 00:36:38.705 { 00:36:38.705 "code": -5, 00:36:38.705 "message": "Input/output error" 00:36:38.705 } 00:36:38.705 18:44:31 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:38.705 18:44:31 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:38.705 18:44:31 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:38.705 18:44:31 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:38.705 18:44:31 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:38.705 18:44:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:38.705 18:44:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:38.705 18:44:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.705 18:44:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:38.705 18:44:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.965 18:44:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:38.965 18:44:32 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:38.965 18:44:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:38.965 18:44:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:38.965 18:44:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.965 18:44:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.965 18:44:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:39.224 18:44:32 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:39.224 18:44:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:39.224 18:44:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:39.483 18:44:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:39.483 18:44:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:39.483 18:44:32 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:39.483 18:44:32 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:39.483 18:44:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:39.742 18:44:32 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:36:39.742 18:44:32 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.3CD5wzUFq3 00:36:39.742 18:44:32 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.3CD5wzUFq3 00:36:39.742 18:44:32 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:39.742 18:44:32 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.3CD5wzUFq3 00:36:39.742 18:44:32 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:39.742 18:44:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:39.742 18:44:32 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:39.742 18:44:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:39.742 18:44:32 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3CD5wzUFq3 00:36:39.742 18:44:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3CD5wzUFq3 00:36:40.001 [2024-10-08 18:44:33.141335] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.3CD5wzUFq3': 0100660 00:36:40.001 [2024-10-08 18:44:33.141364] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:40.001 request: 00:36:40.001 { 00:36:40.001 "name": "key0", 00:36:40.001 "path": "/tmp/tmp.3CD5wzUFq3", 00:36:40.001 "method": "keyring_file_add_key", 00:36:40.001 "req_id": 1 00:36:40.001 } 00:36:40.001 Got JSON-RPC error response 00:36:40.001 response: 00:36:40.001 { 00:36:40.001 "code": -1, 00:36:40.001 "message": "Operation not permitted" 00:36:40.001 } 00:36:40.001 18:44:33 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:40.001 18:44:33 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:40.001 18:44:33 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:40.001 18:44:33 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:40.001 18:44:33 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.3CD5wzUFq3 00:36:40.001 18:44:33 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3CD5wzUFq3 00:36:40.001 18:44:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3CD5wzUFq3 00:36:40.260 18:44:33 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.3CD5wzUFq3 00:36:40.260 18:44:33 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:40.260 18:44:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:40.260 18:44:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:40.260 18:44:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:40.260 18:44:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:40.260 18:44:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.260 18:44:33 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:40.260 18:44:33 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:40.260 18:44:33 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:40.260 18:44:33 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:40.260 18:44:33 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:40.260 18:44:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:40.260 18:44:33 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:40.260 18:44:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:40.260 18:44:33 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:40.260 18:44:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:40.519 [2024-10-08 18:44:33.718870] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.3CD5wzUFq3': No such file or directory 00:36:40.519 [2024-10-08 18:44:33.718892] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:40.519 [2024-10-08 18:44:33.718908] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:40.519 [2024-10-08 18:44:33.718915] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:40.519 [2024-10-08 18:44:33.718923] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:40.519 [2024-10-08 18:44:33.718929] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:40.519 request: 00:36:40.519 { 00:36:40.519 "name": "nvme0", 00:36:40.519 "trtype": "tcp", 00:36:40.519 "traddr": "127.0.0.1", 00:36:40.519 "adrfam": "ipv4", 00:36:40.519 "trsvcid": "4420", 00:36:40.519 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:40.519 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:40.519 "prchk_reftag": false, 00:36:40.519 "prchk_guard": false, 00:36:40.519 "hdgst": false, 00:36:40.519 "ddgst": false, 00:36:40.519 "psk": "key0", 00:36:40.519 "allow_unrecognized_csi": false, 00:36:40.520 "method": "bdev_nvme_attach_controller", 00:36:40.520 "req_id": 1 00:36:40.520 } 00:36:40.520 Got JSON-RPC error response 00:36:40.520 response: 00:36:40.520 { 00:36:40.520 "code": -19, 00:36:40.520 "message": "No such device" 00:36:40.520 } 00:36:40.520 18:44:33 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:40.520 18:44:33 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:40.520 18:44:33 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:40.520 18:44:33 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:40.520 18:44:33 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:40.520 18:44:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:40.778 18:44:33 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:40.778 18:44:33 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:36:40.778 18:44:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:40.778 18:44:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:40.778 18:44:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:40.778 18:44:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:40.778 18:44:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lcezTmrLKD 00:36:40.778 18:44:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:40.778 18:44:33 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:40.778 18:44:33 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:36:40.778 18:44:33 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:36:40.778 18:44:33 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:36:40.778 18:44:33 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:36:40.778 18:44:33 keyring_file -- nvmf/common.sh@731 -- # python - 00:36:40.778 18:44:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lcezTmrLKD 00:36:40.778 18:44:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lcezTmrLKD 00:36:40.778 18:44:33 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.lcezTmrLKD 00:36:40.778 18:44:33 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lcezTmrLKD 00:36:40.778 18:44:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lcezTmrLKD 00:36:41.037 18:44:34 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:41.037 18:44:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:41.295 nvme0n1 00:36:41.295 18:44:34 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:41.295 18:44:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:41.295 18:44:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:41.295 18:44:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.295 18:44:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.295 18:44:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:41.554 18:44:34 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:41.554 18:44:34 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:41.554 18:44:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:41.554 18:44:34 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:41.554 18:44:34 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:41.554 18:44:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.554 18:44:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:41.554 18:44:34 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.813 18:44:35 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:41.813 18:44:35 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:41.813 18:44:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:41.813 18:44:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:41.813 18:44:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.813 18:44:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:41.813 18:44:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.072 18:44:35 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:42.072 18:44:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:42.072 18:44:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:42.331 18:44:35 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:42.331 18:44:35 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:42.331 18:44:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.589 18:44:35 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:42.589 18:44:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lcezTmrLKD 00:36:42.589 18:44:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lcezTmrLKD 00:36:42.589 18:44:35 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RjN9YrzcRi 00:36:42.589 18:44:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RjN9YrzcRi 00:36:42.848 18:44:36 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:42.848 18:44:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:43.107 nvme0n1 00:36:43.107 18:44:36 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:43.107 18:44:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:43.366 18:44:36 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:43.366 "subsystems": [ 00:36:43.366 { 00:36:43.366 "subsystem": "keyring", 00:36:43.366 "config": [ 00:36:43.366 { 00:36:43.366 "method": "keyring_file_add_key", 00:36:43.366 "params": { 00:36:43.366 "name": "key0", 00:36:43.366 "path": "/tmp/tmp.lcezTmrLKD" 00:36:43.366 } 00:36:43.366 }, 00:36:43.366 { 00:36:43.366 "method": "keyring_file_add_key", 00:36:43.366 "params": { 00:36:43.366 "name": "key1", 00:36:43.366 "path": "/tmp/tmp.RjN9YrzcRi" 00:36:43.366 } 00:36:43.366 } 00:36:43.366 ] 00:36:43.366 
}, 00:36:43.366 { 00:36:43.366 "subsystem": "iobuf", 00:36:43.366 "config": [ 00:36:43.366 { 00:36:43.366 "method": "iobuf_set_options", 00:36:43.366 "params": { 00:36:43.366 "small_pool_count": 8192, 00:36:43.366 "large_pool_count": 1024, 00:36:43.366 "small_bufsize": 8192, 00:36:43.366 "large_bufsize": 135168 00:36:43.366 } 00:36:43.366 } 00:36:43.366 ] 00:36:43.366 }, 00:36:43.366 { 00:36:43.366 "subsystem": "sock", 00:36:43.366 "config": [ 00:36:43.366 { 00:36:43.366 "method": "sock_set_default_impl", 00:36:43.366 "params": { 00:36:43.366 "impl_name": "posix" 00:36:43.366 } 00:36:43.366 }, 00:36:43.366 { 00:36:43.366 "method": "sock_impl_set_options", 00:36:43.366 "params": { 00:36:43.366 "impl_name": "ssl", 00:36:43.366 "recv_buf_size": 4096, 00:36:43.366 "send_buf_size": 4096, 00:36:43.366 "enable_recv_pipe": true, 00:36:43.366 "enable_quickack": false, 00:36:43.366 "enable_placement_id": 0, 00:36:43.366 "enable_zerocopy_send_server": true, 00:36:43.366 "enable_zerocopy_send_client": false, 00:36:43.366 "zerocopy_threshold": 0, 00:36:43.366 "tls_version": 0, 00:36:43.366 "enable_ktls": false 00:36:43.366 } 00:36:43.366 }, 00:36:43.366 { 00:36:43.366 "method": "sock_impl_set_options", 00:36:43.366 "params": { 00:36:43.366 "impl_name": "posix", 00:36:43.366 "recv_buf_size": 2097152, 00:36:43.366 "send_buf_size": 2097152, 00:36:43.366 "enable_recv_pipe": true, 00:36:43.366 "enable_quickack": false, 00:36:43.366 "enable_placement_id": 0, 00:36:43.366 "enable_zerocopy_send_server": true, 00:36:43.366 "enable_zerocopy_send_client": false, 00:36:43.366 "zerocopy_threshold": 0, 00:36:43.366 "tls_version": 0, 00:36:43.366 "enable_ktls": false 00:36:43.366 } 00:36:43.366 } 00:36:43.366 ] 00:36:43.366 }, 00:36:43.366 { 00:36:43.366 "subsystem": "vmd", 00:36:43.366 "config": [] 00:36:43.366 }, 00:36:43.366 { 00:36:43.366 "subsystem": "accel", 00:36:43.366 "config": [ 00:36:43.366 { 00:36:43.366 "method": "accel_set_options", 00:36:43.366 "params": { 00:36:43.366 "small_cache_size": 128, 00:36:43.366 "large_cache_size": 16, 00:36:43.366 "task_count": 2048, 00:36:43.366 "sequence_count": 2048, 00:36:43.366 "buf_count": 2048 00:36:43.366 } 00:36:43.366 } 00:36:43.366 ] 00:36:43.366 }, 00:36:43.366 { 00:36:43.366 "subsystem": "bdev", 00:36:43.366 "config": [ 00:36:43.366 { 00:36:43.366 "method": "bdev_set_options", 00:36:43.366 "params": { 00:36:43.366 "bdev_io_pool_size": 65535, 00:36:43.366 "bdev_io_cache_size": 256, 00:36:43.366 "bdev_auto_examine": true, 00:36:43.366 "iobuf_small_cache_size": 128, 00:36:43.366 "iobuf_large_cache_size": 16 00:36:43.366 } 00:36:43.366 }, 00:36:43.366 { 00:36:43.366 "method": "bdev_raid_set_options", 00:36:43.366 "params": { 00:36:43.366 "process_window_size_kb": 1024, 00:36:43.366 "process_max_bandwidth_mb_sec": 0 00:36:43.366 } 00:36:43.366 }, 00:36:43.366 { 00:36:43.366 "method": "bdev_iscsi_set_options", 00:36:43.366 "params": { 00:36:43.366 "timeout_sec": 30 00:36:43.366 } 00:36:43.367 }, 00:36:43.367 { 00:36:43.367 "method": "bdev_nvme_set_options", 00:36:43.367 "params": { 00:36:43.367 "action_on_timeout": "none", 00:36:43.367 "timeout_us": 0, 00:36:43.367 "timeout_admin_us": 0, 00:36:43.367 "keep_alive_timeout_ms": 10000, 00:36:43.367 "arbitration_burst": 0, 00:36:43.367 "low_priority_weight": 0, 00:36:43.367 "medium_priority_weight": 0, 00:36:43.367 "high_priority_weight": 0, 00:36:43.367 "nvme_adminq_poll_period_us": 10000, 00:36:43.367 "nvme_ioq_poll_period_us": 0, 00:36:43.367 "io_queue_requests": 512, 00:36:43.367 "delay_cmd_submit": true, 00:36:43.367 
"transport_retry_count": 4, 00:36:43.367 "bdev_retry_count": 3, 00:36:43.367 "transport_ack_timeout": 0, 00:36:43.367 "ctrlr_loss_timeout_sec": 0, 00:36:43.367 "reconnect_delay_sec": 0, 00:36:43.367 "fast_io_fail_timeout_sec": 0, 00:36:43.367 "disable_auto_failback": false, 00:36:43.367 "generate_uuids": false, 00:36:43.367 "transport_tos": 0, 00:36:43.367 "nvme_error_stat": false, 00:36:43.367 "rdma_srq_size": 0, 00:36:43.367 "io_path_stat": false, 00:36:43.367 "allow_accel_sequence": false, 00:36:43.367 "rdma_max_cq_size": 0, 00:36:43.367 "rdma_cm_event_timeout_ms": 0, 00:36:43.367 "dhchap_digests": [ 00:36:43.367 "sha256", 00:36:43.367 "sha384", 00:36:43.367 "sha512" 00:36:43.367 ], 00:36:43.367 "dhchap_dhgroups": [ 00:36:43.367 "null", 00:36:43.367 "ffdhe2048", 00:36:43.367 "ffdhe3072", 00:36:43.367 "ffdhe4096", 00:36:43.367 "ffdhe6144", 00:36:43.367 "ffdhe8192" 00:36:43.367 ] 00:36:43.367 } 00:36:43.367 }, 00:36:43.367 { 00:36:43.367 "method": "bdev_nvme_attach_controller", 00:36:43.367 "params": { 00:36:43.367 "name": "nvme0", 00:36:43.367 "trtype": "TCP", 00:36:43.367 "adrfam": "IPv4", 00:36:43.367 "traddr": "127.0.0.1", 00:36:43.367 "trsvcid": "4420", 00:36:43.367 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:43.367 "prchk_reftag": false, 00:36:43.367 "prchk_guard": false, 00:36:43.367 "ctrlr_loss_timeout_sec": 0, 00:36:43.367 "reconnect_delay_sec": 0, 00:36:43.367 "fast_io_fail_timeout_sec": 0, 00:36:43.367 "psk": "key0", 00:36:43.367 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:43.367 "hdgst": false, 00:36:43.367 "ddgst": false, 00:36:43.367 "multipath": "multipath" 00:36:43.367 } 00:36:43.367 }, 00:36:43.367 { 00:36:43.367 "method": "bdev_nvme_set_hotplug", 00:36:43.367 "params": { 00:36:43.367 "period_us": 100000, 00:36:43.367 "enable": false 00:36:43.367 } 00:36:43.367 }, 00:36:43.367 { 00:36:43.367 "method": "bdev_wait_for_examine" 00:36:43.367 } 00:36:43.367 ] 00:36:43.367 }, 00:36:43.367 { 00:36:43.367 "subsystem": "nbd", 00:36:43.367 "config": [] 00:36:43.367 } 00:36:43.367 ] 00:36:43.367 }' 00:36:43.367 18:44:36 keyring_file -- keyring/file.sh@115 -- # killprocess 706274 00:36:43.367 18:44:36 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 706274 ']' 00:36:43.367 18:44:36 keyring_file -- common/autotest_common.sh@954 -- # kill -0 706274 00:36:43.367 18:44:36 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:43.367 18:44:36 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:43.367 18:44:36 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 706274 00:36:43.367 18:44:36 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:43.367 18:44:36 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:43.367 18:44:36 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 706274' 00:36:43.367 killing process with pid 706274 00:36:43.367 18:44:36 keyring_file -- common/autotest_common.sh@969 -- # kill 706274 00:36:43.367 Received shutdown signal, test time was about 1.000000 seconds 00:36:43.367 00:36:43.367 Latency(us) 00:36:43.367 [2024-10-08T16:44:36.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:43.367 [2024-10-08T16:44:36.690Z] =================================================================================================================== 00:36:43.367 [2024-10-08T16:44:36.690Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:43.367 18:44:36 keyring_file -- common/autotest_common.sh@974 
-- # wait 706274 00:36:43.626 18:44:36 keyring_file -- keyring/file.sh@118 -- # bperfpid=707850 00:36:43.626 18:44:36 keyring_file -- keyring/file.sh@120 -- # waitforlisten 707850 /var/tmp/bperf.sock 00:36:43.626 18:44:36 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 707850 ']' 00:36:43.626 18:44:36 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:43.626 18:44:36 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:43.626 18:44:36 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:43.626 18:44:36 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:43.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:43.626 18:44:36 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:43.626 "subsystems": [ 00:36:43.626 { 00:36:43.626 "subsystem": "keyring", 00:36:43.626 "config": [ 00:36:43.626 { 00:36:43.626 "method": "keyring_file_add_key", 00:36:43.626 "params": { 00:36:43.626 "name": "key0", 00:36:43.626 "path": "/tmp/tmp.lcezTmrLKD" 00:36:43.626 } 00:36:43.626 }, 00:36:43.626 { 00:36:43.626 "method": "keyring_file_add_key", 00:36:43.626 "params": { 00:36:43.626 "name": "key1", 00:36:43.626 "path": "/tmp/tmp.RjN9YrzcRi" 00:36:43.626 } 00:36:43.626 } 00:36:43.626 ] 00:36:43.626 }, 00:36:43.626 { 00:36:43.626 "subsystem": "iobuf", 00:36:43.626 "config": [ 00:36:43.626 { 00:36:43.626 "method": "iobuf_set_options", 00:36:43.626 "params": { 00:36:43.626 "small_pool_count": 8192, 00:36:43.626 "large_pool_count": 1024, 00:36:43.626 "small_bufsize": 8192, 00:36:43.626 "large_bufsize": 135168 00:36:43.626 } 00:36:43.626 } 00:36:43.626 ] 00:36:43.626 }, 00:36:43.626 { 00:36:43.626 "subsystem": "sock", 00:36:43.626 "config": [ 00:36:43.626 { 00:36:43.626 "method": "sock_set_default_impl", 00:36:43.626 "params": { 00:36:43.626 "impl_name": "posix" 00:36:43.626 } 00:36:43.626 }, 00:36:43.626 { 00:36:43.626 "method": "sock_impl_set_options", 00:36:43.626 "params": { 00:36:43.626 "impl_name": "ssl", 00:36:43.626 "recv_buf_size": 4096, 00:36:43.626 "send_buf_size": 4096, 00:36:43.626 "enable_recv_pipe": true, 00:36:43.626 "enable_quickack": false, 00:36:43.626 "enable_placement_id": 0, 00:36:43.626 "enable_zerocopy_send_server": true, 00:36:43.626 "enable_zerocopy_send_client": false, 00:36:43.626 "zerocopy_threshold": 0, 00:36:43.626 "tls_version": 0, 00:36:43.626 "enable_ktls": false 00:36:43.626 } 00:36:43.626 }, 00:36:43.626 { 00:36:43.626 "method": "sock_impl_set_options", 00:36:43.626 "params": { 00:36:43.626 "impl_name": "posix", 00:36:43.626 "recv_buf_size": 2097152, 00:36:43.626 "send_buf_size": 2097152, 00:36:43.626 "enable_recv_pipe": true, 00:36:43.626 "enable_quickack": false, 00:36:43.626 "enable_placement_id": 0, 00:36:43.626 "enable_zerocopy_send_server": true, 00:36:43.626 "enable_zerocopy_send_client": false, 00:36:43.626 "zerocopy_threshold": 0, 00:36:43.626 "tls_version": 0, 00:36:43.626 "enable_ktls": false 00:36:43.626 } 00:36:43.626 } 00:36:43.626 ] 00:36:43.626 }, 00:36:43.626 { 00:36:43.626 "subsystem": "vmd", 00:36:43.626 "config": [] 00:36:43.626 }, 00:36:43.626 { 00:36:43.626 "subsystem": "accel", 00:36:43.626 "config": [ 00:36:43.626 { 00:36:43.626 "method": "accel_set_options", 00:36:43.626 "params": { 00:36:43.626 
"small_cache_size": 128, 00:36:43.626 "large_cache_size": 16, 00:36:43.626 "task_count": 2048, 00:36:43.626 "sequence_count": 2048, 00:36:43.626 "buf_count": 2048 00:36:43.626 } 00:36:43.626 } 00:36:43.626 ] 00:36:43.626 }, 00:36:43.626 { 00:36:43.626 "subsystem": "bdev", 00:36:43.626 "config": [ 00:36:43.626 { 00:36:43.626 "method": "bdev_set_options", 00:36:43.626 "params": { 00:36:43.626 "bdev_io_pool_size": 65535, 00:36:43.626 "bdev_io_cache_size": 256, 00:36:43.626 "bdev_auto_examine": true, 00:36:43.626 "iobuf_small_cache_size": 128, 00:36:43.626 "iobuf_large_cache_size": 16 00:36:43.626 } 00:36:43.626 }, 00:36:43.626 { 00:36:43.626 "method": "bdev_raid_set_options", 00:36:43.626 "params": { 00:36:43.626 "process_window_size_kb": 1024, 00:36:43.626 "process_max_bandwidth_mb_sec": 0 00:36:43.626 } 00:36:43.626 }, 00:36:43.626 { 00:36:43.626 "method": "bdev_iscsi_set_options", 00:36:43.626 "params": { 00:36:43.626 "timeout_sec": 30 00:36:43.626 } 00:36:43.626 }, 00:36:43.626 { 00:36:43.626 "method": "bdev_nvme_set_options", 00:36:43.626 "params": { 00:36:43.626 "action_on_timeout": "none", 00:36:43.626 "timeout_us": 0, 00:36:43.626 "timeout_admin_us": 0, 00:36:43.626 "keep_alive_timeout_ms": 10000, 00:36:43.626 "arbitration_burst": 0, 00:36:43.626 "low_priority_weight": 0, 00:36:43.626 "medium_priority_weight": 0, 00:36:43.626 "high_priority_weight": 0, 00:36:43.626 "nvme_adminq_poll_period_us": 10000, 00:36:43.626 "nvme_ioq_poll_period_us": 0, 00:36:43.626 "io_queue_requests": 512, 00:36:43.626 "delay_cmd_submit": true, 00:36:43.626 "transport_retry_count": 4, 00:36:43.626 "bdev_retry_count": 3, 00:36:43.626 "transport_ack_timeout": 0, 00:36:43.626 "ctrlr_loss_timeout_sec": 0, 00:36:43.627 "reconnect_delay_sec": 0, 00:36:43.627 "fast_io_fail_timeout_sec": 0, 00:36:43.627 "disable_auto_failback": false, 00:36:43.627 "generate_uuids": false, 00:36:43.627 "transport_tos": 0, 00:36:43.627 "nvme_error_stat": false, 00:36:43.627 "rdma_srq_size": 0, 00:36:43.627 "io_path_stat": false, 00:36:43.627 "allow_accel_sequence": false, 00:36:43.627 "rdma_max_cq_size": 0, 00:36:43.627 "rdma_cm_event_timeout_ms": 0, 00:36:43.627 "dhchap_digests": [ 00:36:43.627 "sha256", 00:36:43.627 "sha384", 00:36:43.627 "sha512" 00:36:43.627 ], 00:36:43.627 "dhchap_dhgroups": [ 00:36:43.627 "null", 00:36:43.627 "ffdhe2048", 00:36:43.627 "ffdhe3072", 00:36:43.627 "ffdhe4096", 00:36:43.627 "ffdhe6144", 00:36:43.627 "ffdhe8192" 00:36:43.627 ] 00:36:43.627 } 00:36:43.627 }, 00:36:43.627 { 00:36:43.627 "method": "bdev_nvme_attach_controller", 00:36:43.627 "params": { 00:36:43.627 "name": "nvme0", 00:36:43.627 "trtype": "TCP", 00:36:43.627 "adrfam": "IPv4", 00:36:43.627 "traddr": "127.0.0.1", 00:36:43.627 "trsvcid": "4420", 00:36:43.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:43.627 "prchk_reftag": false, 00:36:43.627 "prchk_guard": false, 00:36:43.627 "ctrlr_loss_timeout_sec": 0, 00:36:43.627 "reconnect_delay_sec": 0, 00:36:43.627 "fast_io_fail_timeout_sec": 0, 00:36:43.627 "psk": "key0", 00:36:43.627 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:43.627 "hdgst": false, 00:36:43.627 "ddgst": false, 00:36:43.627 "multipath": "multipath" 00:36:43.627 } 00:36:43.627 }, 00:36:43.627 { 00:36:43.627 "method": "bdev_nvme_set_hotplug", 00:36:43.627 "params": { 00:36:43.627 "period_us": 100000, 00:36:43.627 "enable": false 00:36:43.627 } 00:36:43.627 }, 00:36:43.627 { 00:36:43.627 "method": "bdev_wait_for_examine" 00:36:43.627 } 00:36:43.627 ] 00:36:43.627 }, 00:36:43.627 { 00:36:43.627 "subsystem": "nbd", 00:36:43.627 
"config": [] 00:36:43.627 } 00:36:43.627 ] 00:36:43.627 }' 00:36:43.627 18:44:36 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:43.627 18:44:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:43.627 [2024-10-08 18:44:36.838485] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 00:36:43.627 [2024-10-08 18:44:36.838535] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid707850 ] 00:36:43.627 [2024-10-08 18:44:36.906761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.885 [2024-10-08 18:44:36.986155] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:43.885 [2024-10-08 18:44:37.145416] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:44.451 18:44:37 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:44.451 18:44:37 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:44.451 18:44:37 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:44.451 18:44:37 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:44.451 18:44:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.710 18:44:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:44.710 18:44:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:44.710 18:44:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:44.710 18:44:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.710 18:44:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.710 18:44:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:44.710 18:44:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.969 18:44:38 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:44.969 18:44:38 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:44.969 18:44:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.969 18:44:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:44.969 18:44:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.969 18:44:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.969 18:44:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:44.969 18:44:38 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:44.969 18:44:38 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:44.969 18:44:38 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:44.969 18:44:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:45.229 18:44:38 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:45.229 18:44:38 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:45.229 18:44:38 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.lcezTmrLKD /tmp/tmp.RjN9YrzcRi 00:36:45.229 18:44:38 
keyring_file -- keyring/file.sh@20 -- # killprocess 707850 00:36:45.229 18:44:38 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 707850 ']' 00:36:45.229 18:44:38 keyring_file -- common/autotest_common.sh@954 -- # kill -0 707850 00:36:45.229 18:44:38 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:45.229 18:44:38 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:45.229 18:44:38 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 707850 00:36:45.229 18:44:38 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:45.229 18:44:38 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:45.229 18:44:38 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 707850' 00:36:45.229 killing process with pid 707850 00:36:45.229 18:44:38 keyring_file -- common/autotest_common.sh@969 -- # kill 707850 00:36:45.229 Received shutdown signal, test time was about 1.000000 seconds 00:36:45.229 00:36:45.229 Latency(us) 00:36:45.229 [2024-10-08T16:44:38.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:45.229 [2024-10-08T16:44:38.552Z] =================================================================================================================== 00:36:45.229 [2024-10-08T16:44:38.552Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:45.229 18:44:38 keyring_file -- common/autotest_common.sh@974 -- # wait 707850 00:36:45.491 18:44:38 keyring_file -- keyring/file.sh@21 -- # killprocess 706097 00:36:45.491 18:44:38 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 706097 ']' 00:36:45.491 18:44:38 keyring_file -- common/autotest_common.sh@954 -- # kill -0 706097 00:36:45.491 18:44:38 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:45.491 18:44:38 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:45.491 18:44:38 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 706097 00:36:45.491 18:44:38 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:45.491 18:44:38 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:45.491 18:44:38 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 706097' 00:36:45.491 killing process with pid 706097 00:36:45.491 18:44:38 keyring_file -- common/autotest_common.sh@969 -- # kill 706097 00:36:45.491 18:44:38 keyring_file -- common/autotest_common.sh@974 -- # wait 706097 00:36:46.057 00:36:46.057 real 0m13.048s 00:36:46.057 user 0m31.606s 00:36:46.057 sys 0m2.774s 00:36:46.057 18:44:39 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:46.057 18:44:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:46.057 ************************************ 00:36:46.057 END TEST keyring_file 00:36:46.057 ************************************ 00:36:46.057 18:44:39 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:36:46.057 18:44:39 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:46.057 18:44:39 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:46.057 18:44:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:46.057 18:44:39 -- common/autotest_common.sh@10 -- # set +x 00:36:46.057 ************************************ 00:36:46.057 
START TEST keyring_linux 00:36:46.057 ************************************ 00:36:46.057 18:44:39 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:46.057 Joined session keyring: 719166296 00:36:46.057 * Looking for test storage... 00:36:46.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:46.057 18:44:39 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:46.057 18:44:39 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:36:46.057 18:44:39 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:46.057 18:44:39 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:46.057 18:44:39 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:46.057 18:44:39 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:46.057 18:44:39 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:46.057 18:44:39 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:46.057 18:44:39 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:46.057 18:44:39 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:46.057 18:44:39 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:46.057 18:44:39 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:46.057 18:44:39 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:46.057 18:44:39 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:46.057 18:44:39 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:46.057 18:44:39 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:46.057 18:44:39 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:46.057 18:44:39 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:46.058 18:44:39 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:46.058 18:44:39 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:46.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.058 --rc genhtml_branch_coverage=1 00:36:46.058 --rc genhtml_function_coverage=1 00:36:46.058 --rc genhtml_legend=1 00:36:46.058 --rc geninfo_all_blocks=1 00:36:46.058 --rc geninfo_unexecuted_blocks=1 00:36:46.058 00:36:46.058 ' 00:36:46.058 18:44:39 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:46.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.058 --rc genhtml_branch_coverage=1 00:36:46.058 --rc genhtml_function_coverage=1 00:36:46.058 --rc genhtml_legend=1 00:36:46.058 --rc geninfo_all_blocks=1 00:36:46.058 --rc geninfo_unexecuted_blocks=1 00:36:46.058 00:36:46.058 ' 00:36:46.058 18:44:39 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:46.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.058 --rc genhtml_branch_coverage=1 00:36:46.058 --rc genhtml_function_coverage=1 00:36:46.058 --rc genhtml_legend=1 00:36:46.058 --rc geninfo_all_blocks=1 00:36:46.058 --rc geninfo_unexecuted_blocks=1 00:36:46.058 00:36:46.058 ' 00:36:46.058 18:44:39 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:46.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.058 --rc genhtml_branch_coverage=1 00:36:46.058 --rc genhtml_function_coverage=1 00:36:46.058 --rc genhtml_legend=1 00:36:46.058 --rc geninfo_all_blocks=1 00:36:46.058 --rc geninfo_unexecuted_blocks=1 00:36:46.058 00:36:46.058 ' 00:36:46.058 18:44:39 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:46.058 18:44:39 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:46.058 18:44:39 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:46.058 18:44:39 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.058 18:44:39 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.058 18:44:39 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.058 18:44:39 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:46.058 18:44:39 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
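The prep_key/format_interchange_psk sequence just below turns the raw hex strings from linux.sh@13-14 into NVMe/TCP interchange PSKs; the inline `python -` body itself is never echoed by xtrace. A minimal stand-alone sketch of that transformation, assuming the interchange form is base64(key bytes || CRC-32) with the digest field fixed at 00 — the logged key lengths match this layout, but the CRC byte order is an assumption:

```bash
# Hypothetical replay of prep_key key0 ... /tmp/:spdk-test:key0 (keyring/common.sh).
hex_key=00112233445566778899aabbccddeeff   # key0 from linux.sh@13
psk=$(python3 - "$hex_key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # hex string used as raw ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC-32; endianness assumed
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
EOF
)
printf '%s\n' "$psk" > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0              # mirrors keyring/common.sh@21
```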
00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:46.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:46.058 18:44:39 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:46.058 18:44:39 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:46.058 18:44:39 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:46.058 18:44:39 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:46.058 18:44:39 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:46.058 18:44:39 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:46.058 18:44:39 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:46.058 18:44:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:46.058 18:44:39 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:46.058 18:44:39 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:46.058 18:44:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:46.058 18:44:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:46.058 18:44:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:36:46.058 18:44:39 keyring_linux -- nvmf/common.sh@731 -- # python - 00:36:46.317 18:44:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:46.317 18:44:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:46.317 /tmp/:spdk-test:key0 00:36:46.317 18:44:39 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:46.317 18:44:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:46.317 18:44:39 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:46.317 18:44:39 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:46.317 18:44:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:46.317 18:44:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:46.317 
18:44:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:46.317 18:44:39 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:46.317 18:44:39 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:36:46.317 18:44:39 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:36:46.317 18:44:39 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:36:46.317 18:44:39 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:36:46.317 18:44:39 keyring_linux -- nvmf/common.sh@731 -- # python - 00:36:46.317 18:44:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:46.317 18:44:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:46.317 /tmp/:spdk-test:key1 00:36:46.317 18:44:39 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=708405 00:36:46.317 18:44:39 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 708405 00:36:46.317 18:44:39 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:46.317 18:44:39 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 708405 ']' 00:36:46.317 18:44:39 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:46.317 18:44:39 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:46.317 18:44:39 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:46.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:46.317 18:44:39 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:46.317 18:44:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:46.317 [2024-10-08 18:44:39.505666] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
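The spdk_tgt instance launching here is configured a few lines below through the rpc_cmd call at linux.sh@54; its heredoc body is not echoed, only its effects are ("TCP Transport Init", a null0 bdev, a TLS listener on 127.0.0.1:4420). A plausible reconstruction of that target-side setup — every command below is an assumption inferred from those notices, not a quote from the script:

```bash
# Assumed shape of the rpc_cmd heredoc at linux.sh@54.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_null_create null0 100 4096          # name, size in MiB, block size
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host0 --psk /tmp/:spdk-test:key0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 127.0.0.1 -s 4420 --secure-channel
```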
00:36:46.317 [2024-10-08 18:44:39.505721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid708405 ] 00:36:46.317 [2024-10-08 18:44:39.572074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:46.576 [2024-10-08 18:44:39.650741] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:47.141 18:44:40 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:47.141 18:44:40 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:36:47.141 18:44:40 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:47.141 18:44:40 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.141 18:44:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:47.141 [2024-10-08 18:44:40.330303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:47.141 null0 00:36:47.141 [2024-10-08 18:44:40.362347] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:47.141 [2024-10-08 18:44:40.362718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:47.141 18:44:40 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.141 18:44:40 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:47.141 968164974 00:36:47.141 18:44:40 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:47.141 869495682 00:36:47.141 18:44:40 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=708534 00:36:47.141 18:44:40 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:47.141 18:44:40 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 708534 /var/tmp/bperf.sock 00:36:47.141 18:44:40 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 708534 ']' 00:36:47.141 18:44:40 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:47.141 18:44:40 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:47.141 18:44:40 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:47.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:47.141 18:44:40 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:47.141 18:44:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:47.141 [2024-10-08 18:44:40.431701] Starting SPDK v25.01-pre git sha1 ba5b39cb2 / DPDK 24.03.0 initialization... 
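The serials printed above (968164974 for key0, 869495682 for key1) come straight from the kernel session keyring, and the round trip the test exercises can be replayed by hand with the same keyctl calls the log shows:

```bash
# Add the interchange PSK under the session keyring, then resolve and inspect
# it the way linux.sh's get_keysn/check_keys helpers do.
sn=$(keyctl add user ":spdk-test:key0" "$(< /tmp/:spdk-test:key0)" @s)
keyctl search @s user ":spdk-test:key0"   # prints the same serial number
keyctl print "$sn"                        # dumps the NVMeTLSkey-1:00:... payload
keyctl unlink "$sn"                       # what cleanup() runs at linux.sh@34
```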
00:36:47.141 [2024-10-08 18:44:40.431744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid708534 ] 00:36:47.400 [2024-10-08 18:44:40.496791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:47.400 [2024-10-08 18:44:40.567916] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:47.400 18:44:40 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:47.400 18:44:40 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:36:47.400 18:44:40 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:47.400 18:44:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:47.659 18:44:40 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:47.659 18:44:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:47.917 18:44:41 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:47.917 18:44:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:47.917 [2024-10-08 18:44:41.216336] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:48.176 nvme0n1 00:36:48.176 18:44:41 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:48.176 18:44:41 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:48.176 18:44:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:48.176 18:44:41 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:48.176 18:44:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:48.176 18:44:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.434 18:44:41 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:48.435 18:44:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:48.435 18:44:41 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:48.435 18:44:41 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:48.435 18:44:41 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.435 18:44:41 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:48.435 18:44:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.435 18:44:41 keyring_linux -- keyring/linux.sh@25 -- # sn=968164974 00:36:48.435 18:44:41 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:48.435 18:44:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:48.435 18:44:41 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 968164974 == \9\6\8\1\6\4\9\7\4 ]] 00:36:48.435 18:44:41 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 968164974 00:36:48.435 18:44:41 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:48.435 18:44:41 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:48.693 Running I/O for 1 seconds... 00:36:49.628 21712.00 IOPS, 84.81 MiB/s 00:36:49.628 Latency(us) 00:36:49.628 [2024-10-08T16:44:42.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:49.628 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:49.628 nvme0n1 : 1.01 21712.55 84.81 0.00 0.00 5876.05 4930.80 13419.28 00:36:49.628 [2024-10-08T16:44:42.951Z] =================================================================================================================== 00:36:49.628 [2024-10-08T16:44:42.951Z] Total : 21712.55 84.81 0.00 0.00 5876.05 4930.80 13419.28 00:36:49.628 { 00:36:49.628 "results": [ 00:36:49.628 { 00:36:49.628 "job": "nvme0n1", 00:36:49.628 "core_mask": "0x2", 00:36:49.628 "workload": "randread", 00:36:49.628 "status": "finished", 00:36:49.628 "queue_depth": 128, 00:36:49.628 "io_size": 4096, 00:36:49.628 "runtime": 1.00587, 00:36:49.628 "iops": 21712.547347072683, 00:36:49.628 "mibps": 84.81463807450267, 00:36:49.628 "io_failed": 0, 00:36:49.628 "io_timeout": 0, 00:36:49.628 "avg_latency_us": 5876.046010116867, 00:36:49.628 "min_latency_us": 4930.80380952381, 00:36:49.628 "max_latency_us": 13419.27619047619 00:36:49.628 } 00:36:49.628 ], 00:36:49.629 "core_count": 1 00:36:49.629 } 00:36:49.629 18:44:42 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:49.629 18:44:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:49.887 18:44:43 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:49.887 18:44:43 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:49.887 18:44:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:49.887 18:44:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:49.887 18:44:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:49.887 18:44:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.146 18:44:43 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:50.146 18:44:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:50.146 18:44:43 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:50.146 18:44:43 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:50.146 18:44:43 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:36:50.146 18:44:43 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:36:50.146 18:44:43 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:50.146 18:44:43 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:50.146 18:44:43 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:50.146 18:44:43 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:50.146 18:44:43 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:50.146 18:44:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:50.146 [2024-10-08 18:44:43.411091] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:50.146 [2024-10-08 18:44:43.411199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30010 (107): Transport endpoint is not connected 00:36:50.146 [2024-10-08 18:44:43.412194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30010 (9): Bad file descriptor 00:36:50.146 [2024-10-08 18:44:43.413195] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:50.146 [2024-10-08 18:44:43.413205] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:50.146 [2024-10-08 18:44:43.413217] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:50.146 [2024-10-08 18:44:43.413226] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
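The attach at linux.sh@84 is wrapped in NOT (from autotest_common.sh), which inverts the exit status: the step passes precisely because dialing the TLS listener with the mismatched key1 must fail, producing the nvme_tcp/nvme_ctrlr errors above and the JSON-RPC request/response echo below. The flags are the ones the log itself records; only the NOT framing is spelled out here:

```bash
# Negative test: NOT succeeds only if the wrapped command fails, so a rejected
# TLS handshake with the wrong PSK is the expected (passing) outcome.
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key1
```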
00:36:50.146 request: 00:36:50.146 { 00:36:50.146 "name": "nvme0", 00:36:50.146 "trtype": "tcp", 00:36:50.146 "traddr": "127.0.0.1", 00:36:50.146 "adrfam": "ipv4", 00:36:50.146 "trsvcid": "4420", 00:36:50.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:50.146 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:50.146 "prchk_reftag": false, 00:36:50.146 "prchk_guard": false, 00:36:50.146 "hdgst": false, 00:36:50.146 "ddgst": false, 00:36:50.146 "psk": ":spdk-test:key1", 00:36:50.146 "allow_unrecognized_csi": false, 00:36:50.146 "method": "bdev_nvme_attach_controller", 00:36:50.146 "req_id": 1 00:36:50.146 } 00:36:50.146 Got JSON-RPC error response 00:36:50.146 response: 00:36:50.146 { 00:36:50.146 "code": -5, 00:36:50.146 "message": "Input/output error" 00:36:50.146 } 00:36:50.146 18:44:43 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:36:50.146 18:44:43 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:50.146 18:44:43 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:50.146 18:44:43 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:50.146 18:44:43 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:50.146 18:44:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:50.146 18:44:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:50.146 18:44:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:50.146 18:44:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:50.146 18:44:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:50.146 18:44:43 keyring_linux -- keyring/linux.sh@33 -- # sn=968164974 00:36:50.146 18:44:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 968164974 00:36:50.146 1 links removed 00:36:50.146 18:44:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:50.146 18:44:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:50.146 18:44:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:50.147 18:44:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:50.147 18:44:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:50.147 18:44:43 keyring_linux -- keyring/linux.sh@33 -- # sn=869495682 00:36:50.147 18:44:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 869495682 00:36:50.147 1 links removed 00:36:50.147 18:44:43 keyring_linux -- keyring/linux.sh@41 -- # killprocess 708534 00:36:50.147 18:44:43 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 708534 ']' 00:36:50.147 18:44:43 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 708534 00:36:50.147 18:44:43 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:50.147 18:44:43 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:50.147 18:44:43 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 708534 00:36:50.405 18:44:43 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:50.405 18:44:43 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:50.405 18:44:43 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 708534' 00:36:50.405 killing process with pid 708534 00:36:50.405 18:44:43 keyring_linux -- common/autotest_common.sh@969 -- # kill 708534 00:36:50.405 Received shutdown signal, test time was about 1.000000 seconds 00:36:50.405 00:36:50.405 
Latency(us) 00:36:50.405 [2024-10-08T16:44:43.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:50.405 [2024-10-08T16:44:43.728Z] =================================================================================================================== 00:36:50.405 [2024-10-08T16:44:43.728Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:50.405 18:44:43 keyring_linux -- common/autotest_common.sh@974 -- # wait 708534 00:36:50.405 18:44:43 keyring_linux -- keyring/linux.sh@42 -- # killprocess 708405 00:36:50.405 18:44:43 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 708405 ']' 00:36:50.405 18:44:43 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 708405 00:36:50.405 18:44:43 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:50.405 18:44:43 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:50.405 18:44:43 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 708405 00:36:50.664 18:44:43 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:50.664 18:44:43 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:50.664 18:44:43 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 708405' 00:36:50.664 killing process with pid 708405 00:36:50.664 18:44:43 keyring_linux -- common/autotest_common.sh@969 -- # kill 708405 00:36:50.664 18:44:43 keyring_linux -- common/autotest_common.sh@974 -- # wait 708405 00:36:50.923 00:36:50.923 real 0m4.898s 00:36:50.923 user 0m8.907s 00:36:50.923 sys 0m1.517s 00:36:50.923 18:44:44 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:50.923 18:44:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:50.923 ************************************ 00:36:50.923 END TEST keyring_linux 00:36:50.923 ************************************ 00:36:50.923 18:44:44 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:36:50.923 18:44:44 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:50.923 18:44:44 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:50.923 18:44:44 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:36:50.923 18:44:44 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:36:50.923 18:44:44 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:36:50.923 18:44:44 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:50.923 18:44:44 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:50.923 18:44:44 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:50.923 18:44:44 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:36:50.923 18:44:44 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:50.923 18:44:44 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:36:50.923 18:44:44 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:50.923 18:44:44 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:50.923 18:44:44 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:36:50.923 18:44:44 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:36:50.923 18:44:44 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:36:50.923 18:44:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:50.923 18:44:44 -- common/autotest_common.sh@10 -- # set +x 00:36:50.923 18:44:44 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:36:50.923 18:44:44 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:36:50.923 18:44:44 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:36:50.923 18:44:44 -- common/autotest_common.sh@10 -- # set +x 00:36:56.195 INFO: APP EXITING 00:36:56.195 INFO: 
killing all VMs 00:36:56.195 INFO: killing vhost app 00:36:56.195 INFO: EXIT DONE 00:36:58.729 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:58.729 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:58.729 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:58.729 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:58.729 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:58.729 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:58.729 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:58.729 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:58.729 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:58.729 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:58.729 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:58.729 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:58.729 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:58.729 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:58.729 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:58.729 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:58.729 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:37:02.017 Cleaning 00:37:02.017 Removing: /var/run/dpdk/spdk0/config 00:37:02.017 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:02.018 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:02.018 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:02.018 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:02.018 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:02.018 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:02.018 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:02.018 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:02.018 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:02.018 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:02.018 Removing: /var/run/dpdk/spdk1/config 00:37:02.018 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:02.018 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:02.018 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:02.018 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:02.018 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:02.018 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:02.018 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:02.018 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:02.018 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:02.018 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:02.018 Removing: /var/run/dpdk/spdk2/config 00:37:02.018 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:02.018 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:02.018 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:02.018 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:02.018 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:02.018 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:02.018 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:02.018 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:02.018 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:02.018 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:02.018 Removing: /var/run/dpdk/spdk3/config 00:37:02.018 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:02.018 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:02.018 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:02.018 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:02.018 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:02.018 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:02.018 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:02.018 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:02.018 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:02.018 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:02.018 Removing: /var/run/dpdk/spdk4/config 00:37:02.018 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:02.018 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:02.018 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:02.018 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:02.018 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:02.018 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:02.018 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:02.018 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:02.018 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:02.018 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:02.018 Removing: /dev/shm/bdev_svc_trace.1 00:37:02.018 Removing: /dev/shm/nvmf_trace.0 00:37:02.018 Removing: /dev/shm/spdk_tgt_trace.pid221223 00:37:02.018 Removing: /var/run/dpdk/spdk0 00:37:02.018 Removing: /var/run/dpdk/spdk1 00:37:02.018 Removing: /var/run/dpdk/spdk2 00:37:02.018 Removing: /var/run/dpdk/spdk3 00:37:02.018 Removing: /var/run/dpdk/spdk4 00:37:02.018 Removing: /var/run/dpdk/spdk_pid218846 00:37:02.018 Removing: /var/run/dpdk/spdk_pid219970 00:37:02.018 Removing: /var/run/dpdk/spdk_pid221223 00:37:02.018 Removing: /var/run/dpdk/spdk_pid221930 00:37:02.018 Removing: /var/run/dpdk/spdk_pid222945 00:37:02.018 Removing: /var/run/dpdk/spdk_pid223188 00:37:02.018 Removing: /var/run/dpdk/spdk_pid224544 00:37:02.018 Removing: /var/run/dpdk/spdk_pid224779 00:37:02.018 Removing: /var/run/dpdk/spdk_pid225131 00:37:02.018 Removing: /var/run/dpdk/spdk_pid226837 00:37:02.018 Removing: /var/run/dpdk/spdk_pid228129 00:37:02.018 Removing: /var/run/dpdk/spdk_pid228453 00:37:02.018 Removing: /var/run/dpdk/spdk_pid228745 00:37:02.018 Removing: /var/run/dpdk/spdk_pid229062 00:37:02.018 Removing: /var/run/dpdk/spdk_pid229459 00:37:02.018 Removing: /var/run/dpdk/spdk_pid229665 00:37:02.018 Removing: /var/run/dpdk/spdk_pid229877 00:37:02.018 Removing: /var/run/dpdk/spdk_pid230232 00:37:02.018 Removing: /var/run/dpdk/spdk_pid231130 00:37:02.018 Removing: /var/run/dpdk/spdk_pid234145 00:37:02.018 Removing: /var/run/dpdk/spdk_pid234452 00:37:02.018 Removing: /var/run/dpdk/spdk_pid234835 00:37:02.018 Removing: /var/run/dpdk/spdk_pid234901 00:37:02.018 Removing: /var/run/dpdk/spdk_pid235387 00:37:02.018 Removing: /var/run/dpdk/spdk_pid235619 00:37:02.018 Removing: /var/run/dpdk/spdk_pid236014 00:37:02.018 Removing: /var/run/dpdk/spdk_pid236125 00:37:02.018 Removing: /var/run/dpdk/spdk_pid236391 00:37:02.018 Removing: /var/run/dpdk/spdk_pid236622 00:37:02.018 Removing: /var/run/dpdk/spdk_pid236880 00:37:02.018 Removing: /var/run/dpdk/spdk_pid236886 00:37:02.018 Removing: /var/run/dpdk/spdk_pid237451 00:37:02.018 Removing: /var/run/dpdk/spdk_pid237698 00:37:02.018 Removing: /var/run/dpdk/spdk_pid238000 00:37:02.018 Removing: /var/run/dpdk/spdk_pid241942 00:37:02.018 
00:37:02.018 Removing: /var/run/dpdk/spdk_pid246426
00:37:02.018 Removing: /var/run/dpdk/spdk_pid256689
00:37:02.018 Removing: /var/run/dpdk/spdk_pid257381
00:37:02.018 Removing: /var/run/dpdk/spdk_pid261667
00:37:02.018 Removing: /var/run/dpdk/spdk_pid262136
00:37:02.018 Removing: /var/run/dpdk/spdk_pid266537
00:37:02.018 Removing: /var/run/dpdk/spdk_pid273033
00:37:02.018 Removing: /var/run/dpdk/spdk_pid275740
00:37:02.018 Removing: /var/run/dpdk/spdk_pid286537
00:37:02.018 Removing: /var/run/dpdk/spdk_pid295695
00:37:02.018 Removing: /var/run/dpdk/spdk_pid297433
00:37:02.018 Removing: /var/run/dpdk/spdk_pid298378
00:37:02.018 Removing: /var/run/dpdk/spdk_pid316207
00:37:02.018 Removing: /var/run/dpdk/spdk_pid320252
00:37:02.018 Removing: /var/run/dpdk/spdk_pid367152
00:37:02.018 Removing: /var/run/dpdk/spdk_pid372587
00:37:02.018 Removing: /var/run/dpdk/spdk_pid378443
00:37:02.018 Removing: /var/run/dpdk/spdk_pid384698
00:37:02.018 Removing: /var/run/dpdk/spdk_pid384700
00:37:02.018 Removing: /var/run/dpdk/spdk_pid385612
00:37:02.018 Removing: /var/run/dpdk/spdk_pid386520
00:37:02.018 Removing: /var/run/dpdk/spdk_pid387433
00:37:02.018 Removing: /var/run/dpdk/spdk_pid387923
00:37:02.018 Removing: /var/run/dpdk/spdk_pid387926
00:37:02.018 Removing: /var/run/dpdk/spdk_pid388162
00:37:02.018 Removing: /var/run/dpdk/spdk_pid388383
00:37:02.278 Removing: /var/run/dpdk/spdk_pid388385
00:37:02.278 Removing: /var/run/dpdk/spdk_pid389307
00:37:02.278 Removing: /var/run/dpdk/spdk_pid390118
00:37:02.278 Removing: /var/run/dpdk/spdk_pid390920
00:37:02.278 Removing: /var/run/dpdk/spdk_pid391600
00:37:02.278 Removing: /var/run/dpdk/spdk_pid391602
00:37:02.278 Removing: /var/run/dpdk/spdk_pid391838
00:37:02.278 Removing: /var/run/dpdk/spdk_pid393094
00:37:02.278 Removing: /var/run/dpdk/spdk_pid394300
00:37:02.278 Removing: /var/run/dpdk/spdk_pid402613
00:37:02.278 Removing: /var/run/dpdk/spdk_pid431736
00:37:02.278 Removing: /var/run/dpdk/spdk_pid436257
00:37:02.278 Removing: /var/run/dpdk/spdk_pid437919
00:37:02.278 Removing: /var/run/dpdk/spdk_pid440042
00:37:02.278 Removing: /var/run/dpdk/spdk_pid440286
00:37:02.278 Removing: /var/run/dpdk/spdk_pid440527
00:37:02.278 Removing: /var/run/dpdk/spdk_pid440772
00:37:02.278 Removing: /var/run/dpdk/spdk_pid441761
00:37:02.278 Removing: /var/run/dpdk/spdk_pid443725
00:37:02.278 Removing: /var/run/dpdk/spdk_pid444794
00:37:02.278 Removing: /var/run/dpdk/spdk_pid445243
00:37:02.278 Removing: /var/run/dpdk/spdk_pid447569
00:37:02.278 Removing: /var/run/dpdk/spdk_pid448289
00:37:02.278 Removing: /var/run/dpdk/spdk_pid449017
00:37:02.278 Removing: /var/run/dpdk/spdk_pid453267
00:37:02.278 Removing: /var/run/dpdk/spdk_pid458901
00:37:02.278 Removing: /var/run/dpdk/spdk_pid458903
00:37:02.278 Removing: /var/run/dpdk/spdk_pid458905
00:37:02.278 Removing: /var/run/dpdk/spdk_pid462711
00:37:02.278 Removing: /var/run/dpdk/spdk_pid471271
00:37:02.278 Removing: /var/run/dpdk/spdk_pid475191
00:37:02.278 Removing: /var/run/dpdk/spdk_pid481300
00:37:02.278 Removing: /var/run/dpdk/spdk_pid482602
00:37:02.278 Removing: /var/run/dpdk/spdk_pid484220
00:37:02.278 Removing: /var/run/dpdk/spdk_pid485989
00:37:02.278 Removing: /var/run/dpdk/spdk_pid490908
00:37:02.278 Removing: /var/run/dpdk/spdk_pid494998
00:37:02.278 Removing: /var/run/dpdk/spdk_pid502602
00:37:02.278 Removing: /var/run/dpdk/spdk_pid502604
00:37:02.278 Removing: /var/run/dpdk/spdk_pid507321
00:37:02.278 Removing: /var/run/dpdk/spdk_pid507553
00:37:02.278 Removing: /var/run/dpdk/spdk_pid507780
00:37:02.278 Removing: /var/run/dpdk/spdk_pid508234
00:37:02.278 Removing: /var/run/dpdk/spdk_pid508245
00:37:02.278 Removing: /var/run/dpdk/spdk_pid512962
00:37:02.278 Removing: /var/run/dpdk/spdk_pid513529
00:37:02.278 Removing: /var/run/dpdk/spdk_pid517893
00:37:02.278 Removing: /var/run/dpdk/spdk_pid520678
00:37:02.278 Removing: /var/run/dpdk/spdk_pid526273
00:37:02.278 Removing: /var/run/dpdk/spdk_pid531831
00:37:02.278 Removing: /var/run/dpdk/spdk_pid541133
00:37:02.278 Removing: /var/run/dpdk/spdk_pid548356
00:37:02.278 Removing: /var/run/dpdk/spdk_pid548360
00:37:02.278 Removing: /var/run/dpdk/spdk_pid567452
00:37:02.278 Removing: /var/run/dpdk/spdk_pid568104
00:37:02.278 Removing: /var/run/dpdk/spdk_pid568803
00:37:02.278 Removing: /var/run/dpdk/spdk_pid569499
00:37:02.278 Removing: /var/run/dpdk/spdk_pid570469
00:37:02.278 Removing: /var/run/dpdk/spdk_pid570974
00:37:02.278 Removing: /var/run/dpdk/spdk_pid571649
00:37:02.278 Removing: /var/run/dpdk/spdk_pid572350
00:37:02.278 Removing: /var/run/dpdk/spdk_pid576647
00:37:02.278 Removing: /var/run/dpdk/spdk_pid576970
00:37:02.278 Removing: /var/run/dpdk/spdk_pid583481
00:37:02.278 Removing: /var/run/dpdk/spdk_pid583708
00:37:02.537 Removing: /var/run/dpdk/spdk_pid589196
00:37:02.537 Removing: /var/run/dpdk/spdk_pid593637
00:37:02.537 Removing: /var/run/dpdk/spdk_pid603445
00:37:02.537 Removing: /var/run/dpdk/spdk_pid603916
00:37:02.537 Removing: /var/run/dpdk/spdk_pid608192
00:37:02.537 Removing: /var/run/dpdk/spdk_pid608641
00:37:02.537 Removing: /var/run/dpdk/spdk_pid612900
00:37:02.537 Removing: /var/run/dpdk/spdk_pid618665
00:37:02.537 Removing: /var/run/dpdk/spdk_pid621344
00:37:02.537 Removing: /var/run/dpdk/spdk_pid632251
00:37:02.537 Removing: /var/run/dpdk/spdk_pid641152
00:37:02.537 Removing: /var/run/dpdk/spdk_pid642813
00:37:02.537 Removing: /var/run/dpdk/spdk_pid643674
00:37:02.537 Removing: /var/run/dpdk/spdk_pid660031
00:37:02.537 Removing: /var/run/dpdk/spdk_pid664066
00:37:02.537 Removing: /var/run/dpdk/spdk_pid666752
00:37:02.537 Removing: /var/run/dpdk/spdk_pid675459
00:37:02.537 Removing: /var/run/dpdk/spdk_pid675464
00:37:02.537 Removing: /var/run/dpdk/spdk_pid680717
00:37:02.537 Removing: /var/run/dpdk/spdk_pid682684
00:37:02.537 Removing: /var/run/dpdk/spdk_pid684559
00:37:02.537 Removing: /var/run/dpdk/spdk_pid685708
00:37:02.537 Removing: /var/run/dpdk/spdk_pid687676
00:37:02.537 Removing: /var/run/dpdk/spdk_pid688781
00:37:02.537 Removing: /var/run/dpdk/spdk_pid697709
00:37:02.537 Removing: /var/run/dpdk/spdk_pid698174
00:37:02.537 Removing: /var/run/dpdk/spdk_pid698850
00:37:02.537 Removing: /var/run/dpdk/spdk_pid701119
00:37:02.537 Removing: /var/run/dpdk/spdk_pid701581
00:37:02.537 Removing: /var/run/dpdk/spdk_pid702067
00:37:02.537 Removing: /var/run/dpdk/spdk_pid706097
00:37:02.537 Removing: /var/run/dpdk/spdk_pid706274
00:37:02.537 Removing: /var/run/dpdk/spdk_pid707850
00:37:02.537 Removing: /var/run/dpdk/spdk_pid708405
00:37:02.537 Removing: /var/run/dpdk/spdk_pid708534
00:37:02.537 Clean
00:37:02.537 18:44:55 -- common/autotest_common.sh@1451 -- # return 0
00:37:02.537 18:44:55 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:37:02.537 18:44:55 -- common/autotest_common.sh@730 -- # xtrace_disable
00:37:02.537 18:44:55 -- common/autotest_common.sh@10 -- # set +x
00:37:02.537 18:44:55 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:37:02.537 18:44:55 -- common/autotest_common.sh@730 -- # xtrace_disable
00:37:02.537 18:44:55 -- common/autotest_common.sh@10 -- # set +x
00:37:02.796 18:44:55 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:02.796 18:44:55 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:37:02.796 18:44:55 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:37:02.796 18:44:55 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:37:02.796 18:44:55 -- spdk/autotest.sh@394 -- # hostname
00:37:02.796 18:44:55 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:37:02.796 geninfo: WARNING: invalid characters removed from testname!
00:37:24.723 18:45:16 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:26.099 18:45:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:28.099 18:45:20 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:29.577 18:45:22 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:31.479 18:45:24 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:33.383 18:45:26 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:35.286 18:45:28 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:35.286 18:45:28 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:37:35.286 18:45:28 -- common/autotest_common.sh@1681 -- $ lcov --version
00:37:35.286 18:45:28 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:37:35.286 18:45:28 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:37:35.286 18:45:28 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:37:35.286 18:45:28 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:37:35.286 18:45:28 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:37:35.286 18:45:28 -- scripts/common.sh@336 -- $ IFS=.-:
00:37:35.286 18:45:28 -- scripts/common.sh@336 -- $ read -ra ver1
00:37:35.286 18:45:28 -- scripts/common.sh@337 -- $ IFS=.-:
00:37:35.286 18:45:28 -- scripts/common.sh@337 -- $ read -ra ver2
00:37:35.286 18:45:28 -- scripts/common.sh@338 -- $ local 'op=<'
00:37:35.286 18:45:28 -- scripts/common.sh@340 -- $ ver1_l=2
00:37:35.286 18:45:28 -- scripts/common.sh@341 -- $ ver2_l=1
00:37:35.286 18:45:28 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:37:35.286 18:45:28 -- scripts/common.sh@344 -- $ case "$op" in
00:37:35.286 18:45:28 -- scripts/common.sh@345 -- $ : 1
00:37:35.286 18:45:28 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:37:35.286 18:45:28 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:37:35.286 18:45:28 -- scripts/common.sh@365 -- $ decimal 1
00:37:35.286 18:45:28 -- scripts/common.sh@353 -- $ local d=1
00:37:35.286 18:45:28 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:37:35.286 18:45:28 -- scripts/common.sh@355 -- $ echo 1
00:37:35.286 18:45:28 -- scripts/common.sh@365 -- $ ver1[v]=1
00:37:35.286 18:45:28 -- scripts/common.sh@366 -- $ decimal 2
00:37:35.286 18:45:28 -- scripts/common.sh@353 -- $ local d=2
00:37:35.286 18:45:28 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:37:35.286 18:45:28 -- scripts/common.sh@355 -- $ echo 2
00:37:35.286 18:45:28 -- scripts/common.sh@366 -- $ ver2[v]=2
00:37:35.286 18:45:28 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:37:35.286 18:45:28 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:37:35.286 18:45:28 -- scripts/common.sh@368 -- $ return 0
00:37:35.286 18:45:28 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:37:35.286 18:45:28 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:37:35.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:35.286 --rc genhtml_branch_coverage=1
00:37:35.286 --rc genhtml_function_coverage=1
00:37:35.286 --rc genhtml_legend=1
00:37:35.286 --rc geninfo_all_blocks=1
00:37:35.286 --rc geninfo_unexecuted_blocks=1
00:37:35.286 
00:37:35.286 '
00:37:35.286 18:45:28 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:37:35.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:35.286 --rc genhtml_branch_coverage=1
00:37:35.286 --rc genhtml_function_coverage=1
00:37:35.286 --rc genhtml_legend=1
00:37:35.286 --rc geninfo_all_blocks=1
00:37:35.286 --rc geninfo_unexecuted_blocks=1
00:37:35.286 
00:37:35.286 '
00:37:35.286 18:45:28 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:37:35.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:35.286 --rc genhtml_branch_coverage=1
00:37:35.286 --rc genhtml_function_coverage=1
00:37:35.286 --rc genhtml_legend=1
00:37:35.286 --rc geninfo_all_blocks=1
00:37:35.286 --rc geninfo_unexecuted_blocks=1
00:37:35.286 
00:37:35.286 '
00:37:35.286 18:45:28 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:37:35.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:35.286 --rc genhtml_branch_coverage=1
00:37:35.286 --rc genhtml_function_coverage=1
00:37:35.286 --rc genhtml_legend=1
00:37:35.286 --rc geninfo_all_blocks=1
00:37:35.286 --rc geninfo_unexecuted_blocks=1
00:37:35.286 
00:37:35.286 '
00:37:35.286 18:45:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:35.286 18:45:28 -- scripts/common.sh@15 -- $ shopt -s extglob
00:37:35.286 18:45:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:37:35.286 18:45:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:35.286 18:45:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:35.286 18:45:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:35.286 18:45:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:35.286 18:45:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:35.286 18:45:28 -- paths/export.sh@5 -- $ export PATH
00:37:35.286 18:45:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:35.286 18:45:28 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:37:35.286 18:45:28 -- common/autobuild_common.sh@486 -- $ date +%s
00:37:35.286 18:45:28 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728405928.XXXXXX
00:37:35.286 18:45:28 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728405928.1zUUv6
00:37:35.286 18:45:28 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:37:35.286 18:45:28 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:37:35.286 18:45:28 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:37:35.286 18:45:28 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:37:35.286 18:45:28 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:37:35.286 18:45:28 -- common/autobuild_common.sh@502 -- $ get_config_params
00:37:35.286 18:45:28 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:37:35.286 18:45:28 -- common/autotest_common.sh@10 -- $ set +x
00:37:35.286 18:45:28 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:37:35.286 18:45:28 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:37:35.286 18:45:28 -- pm/common@17 -- $ local monitor
00:37:35.286 18:45:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:35.286 18:45:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:35.286 18:45:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:35.286 18:45:28 -- pm/common@21 -- $ date +%s
00:37:35.286 18:45:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:35.286 18:45:28 -- pm/common@21 -- $ date +%s
00:37:35.286 18:45:28 -- pm/common@25 -- $ sleep 1
00:37:35.286 18:45:28 -- pm/common@21 -- $ date +%s
00:37:35.286 18:45:28 -- pm/common@21 -- $ date +%s
00:37:35.286 18:45:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728405928
00:37:35.287 18:45:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728405928
00:37:35.287 18:45:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728405928
00:37:35.287 18:45:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728405928
00:37:35.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728405928_collect-cpu-load.pm.log
00:37:35.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728405928_collect-vmstat.pm.log
00:37:35.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728405928_collect-cpu-temp.pm.log
00:37:35.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728405928_collect-bmc-pm.bmc.pm.log
00:37:36.224 18:45:29 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:37:36.224 18:45:29 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:37:36.224 18:45:29 -- spdk/autopackage.sh@14 -- $ timing_finish
00:37:36.224 18:45:29 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:36.224 18:45:29 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:36.224 18:45:29 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:36.224 18:45:29 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:37:36.224 18:45:29 -- pm/common@29 -- $ signal_monitor_resources TERM
00:37:36.224 18:45:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:37:36.224 18:45:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:36.224 18:45:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:37:36.224 18:45:29 -- pm/common@44 -- $ pid=719720
00:37:36.224 18:45:29 -- pm/common@50 -- $ kill -TERM 719720
00:37:36.224 18:45:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:36.224 18:45:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:37:36.224 18:45:29 -- pm/common@44 -- $ pid=719722
00:37:36.224 18:45:29 -- pm/common@50 -- $ kill -TERM 719722
00:37:36.224 18:45:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:36.224 18:45:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:37:36.224 18:45:29 -- pm/common@44 -- $ pid=719724
00:37:36.224 18:45:29 -- pm/common@50 -- $ kill -TERM 719724
00:37:36.224 18:45:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:36.224 18:45:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:37:36.224 18:45:29 -- pm/common@44 -- $ pid=719749
00:37:36.224 18:45:29 -- pm/common@50 -- $ sudo -E kill -TERM 719749
+ [[ -n 142358 ]]
+ sudo kill 142358
00:37:36.234 [Pipeline] }
00:37:36.249 [Pipeline] // stage
00:37:36.254 [Pipeline] }
00:37:36.268 [Pipeline] // timeout
00:37:36.273 [Pipeline] }
00:37:36.287 [Pipeline] // catchError
00:37:36.292 [Pipeline] }
00:37:36.306 [Pipeline] // wrap
00:37:36.312 [Pipeline] }
00:37:36.326 [Pipeline] // catchError
00:37:36.335 [Pipeline] stage
00:37:36.337 [Pipeline] { (Epilogue)
00:37:36.350 [Pipeline] catchError
00:37:36.351 [Pipeline] {
00:37:36.363 [Pipeline] echo
00:37:36.365 Cleanup processes
00:37:36.370 [Pipeline] sh
00:37:36.656 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:36.656 719879 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:37:36.656 720222 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:36.669 [Pipeline] sh
00:37:36.955 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:36.955 ++ grep -v 'sudo pgrep'
00:37:36.955 ++ awk '{print $1}'
00:37:36.955 + sudo kill -9 719879
00:37:36.966 [Pipeline] sh
00:37:37.247 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:49.462 [Pipeline] sh
00:37:49.748 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:49.748 Artifacts sizes are good
00:37:49.761 [Pipeline] archiveArtifacts
00:37:49.767 Archiving artifacts
00:37:49.887 [Pipeline] sh
00:37:50.171 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:37:50.186 [Pipeline] cleanWs
00:37:50.195 [WS-CLEANUP] Deleting project workspace...
00:37:50.195 [WS-CLEANUP] Deferred wipeout is used...
00:37:50.202 [WS-CLEANUP] done
00:37:50.203 [Pipeline] }
00:37:50.220 [Pipeline] // catchError
00:37:50.231 [Pipeline] sh
00:37:50.558 + logger -p user.info -t JENKINS-CI
00:37:50.567 [Pipeline] }
00:37:50.580 [Pipeline] // stage
00:37:50.584 [Pipeline] }
00:37:50.598 [Pipeline] // node
00:37:50.603 [Pipeline] End of Pipeline
00:37:50.639 Finished: SUCCESS